00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3922 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3517 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.067 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.068 The recommended git tool is: git 00:00:00.069 using credential 00000000-0000-0000-0000-000000000002 00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.106 Fetching changes from the remote Git repository 00:00:00.108 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.177 Using shallow fetch with depth 1 00:00:00.177 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.177 > git --version # timeout=10 00:00:00.240 > git --version # 'git version 2.39.2' 00:00:00.240 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.286 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.286 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.311 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.323 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.336 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:04.336 > git config core.sparsecheckout # timeout=10 00:00:04.347 > git read-tree -mu HEAD # timeout=10 00:00:04.362 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:04.379 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:04.379 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:04.481 [Pipeline] Start of Pipeline 00:00:04.495 [Pipeline] library 00:00:04.497 Loading library shm_lib@master 00:00:04.497 Library shm_lib@master is cached. Copying from home. 00:00:04.515 [Pipeline] node 00:00:04.530 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.532 [Pipeline] { 00:00:04.543 [Pipeline] catchError 00:00:04.544 [Pipeline] { 00:00:04.556 [Pipeline] wrap 00:00:04.564 [Pipeline] { 00:00:04.573 [Pipeline] stage 00:00:04.575 [Pipeline] { (Prologue) 00:00:04.594 [Pipeline] echo 00:00:04.596 Node: VM-host-SM38 00:00:04.603 [Pipeline] cleanWs 00:00:04.615 [WS-CLEANUP] Deleting project workspace... 00:00:04.615 [WS-CLEANUP] Deferred wipeout is used... 
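The checkout above reduces to a shallow, single-ref fetch of the job-pool repo; reproduced by hand it would look roughly like this (a sketch: the Jenkins git plugin additionally wires in the GIT_ASKPASS credential helper and the proxy setting shown in the log):

    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # --depth=1 keeps only the tip commit of master
    git fetch --tags --force --progress --depth=1 origin refs/heads/master
    # detach at the fetched revision, as the log does
    git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1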
00:00:04.622 [WS-CLEANUP] done 00:00:04.810 [Pipeline] setCustomBuildProperty 00:00:04.883 [Pipeline] httpRequest 00:00:05.411 [Pipeline] echo 00:00:05.413 Sorcerer 10.211.164.101 is alive 00:00:05.421 [Pipeline] retry 00:00:05.422 [Pipeline] { 00:00:05.435 [Pipeline] httpRequest 00:00:05.439 HttpMethod: GET 00:00:05.440 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:05.441 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:05.447 Response Code: HTTP/1.1 200 OK 00:00:05.448 Success: Status code 200 is in the accepted range: 200,404 00:00:05.448 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.227 [Pipeline] } 00:00:07.242 [Pipeline] // retry 00:00:07.248 [Pipeline] sh 00:00:07.532 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.544 [Pipeline] httpRequest 00:00:07.936 [Pipeline] echo 00:00:07.938 Sorcerer 10.211.164.101 is alive 00:00:07.945 [Pipeline] retry 00:00:07.946 [Pipeline] { 00:00:07.954 [Pipeline] httpRequest 00:00:07.958 HttpMethod: GET 00:00:07.958 URL: http://10.211.164.101/packages/spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:00:07.959 Sending request to url: http://10.211.164.101/packages/spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:00:07.961 Response Code: HTTP/1.1 200 OK 00:00:07.961 Success: Status code 200 is in the accepted range: 200,404 00:00:07.962 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:00:27.462 [Pipeline] } 00:00:27.480 [Pipeline] // retry 00:00:27.488 [Pipeline] sh 00:00:27.778 + tar --no-same-owner -xf spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:00:30.332 [Pipeline] sh 00:00:30.618 + git -C spdk log --oneline -n5 00:00:30.618 92108e0a2 fsdev/aio: add support for null IOs 00:00:30.618 dcdab59d3 lib/reduce: Check return code of read superblock 00:00:30.618 95d9d27f7 bdev/nvme: controller failover/multipath doc change 00:00:30.618 f366dac4a bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create() 00:00:30.618 aa7c3b1e2 bdev/nvme: changed default config to multipath 00:00:30.639 [Pipeline] withCredentials 00:00:30.655 > git --version # timeout=10 00:00:30.671 > git --version # 'git version 2.39.2' 00:00:30.690 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:30.692 [Pipeline] { 00:00:30.702 [Pipeline] retry 00:00:30.704 [Pipeline] { 00:00:30.718 [Pipeline] sh 00:00:31.007 + git ls-remote http://dpdk.org/git/dpdk main 00:00:31.282 [Pipeline] } 00:00:31.300 [Pipeline] // retry 00:00:31.305 [Pipeline] } 00:00:31.321 [Pipeline] // withCredentials 00:00:31.330 [Pipeline] httpRequest 00:00:31.736 [Pipeline] echo 00:00:31.738 Sorcerer 10.211.164.101 is alive 00:00:31.747 [Pipeline] retry 00:00:31.749 [Pipeline] { 00:00:31.763 [Pipeline] httpRequest 00:00:31.769 HttpMethod: GET 00:00:31.770 URL: http://10.211.164.101/packages/dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:00:31.770 Sending request to url: http://10.211.164.101/packages/dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:00:31.780 Response Code: HTTP/1.1 200 OK 00:00:31.781 Success: Status code 200 is in the accepted range: 200,404 00:00:31.782 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:00:40.276 [Pipeline] } 00:00:40.290 [Pipeline] // retry 00:00:40.297 
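The retry/httpRequest/sh sequence above downloads a pinned tarball from the internal package cache ("Sorcerer") and unpacks it; a rough shell equivalent, with curl standing in for the Jenkins httpRequest step (assumption: three attempts approximate the plugin's retry policy):

    pkg=spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz
    for attempt in 1 2 3; do
        curl -fsS -o "$pkg" "http://10.211.164.101/packages/$pkg" && break  # retry transient failures
        sleep 5
    done
    # --no-same-owner: create extracted files as the invoking user, not the archive's uid/gid
    tar --no-same-owner -xf "$pkg"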
[Pipeline] sh 00:00:40.582 + tar --no-same-owner -xf dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:00:41.983 [Pipeline] sh 00:00:42.270 + git -C dpdk log --oneline -n5 00:00:42.270 e7bc451c99 trace: disable traces at compilation 00:00:42.270 dbdf3d5581 timer: override CPU TSC frequency with OS value 00:00:42.270 7268f21aa0 timer: improve TSC estimation accuracy 00:00:42.270 8df71650e9 drivers: remove more redundant newline in Marvell drivers 00:00:42.270 41b09d64e3 eal/x86: fix 32-bit write combining store 00:00:42.289 [Pipeline] writeFile 00:00:42.304 [Pipeline] sh 00:00:42.591 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:42.604 [Pipeline] sh 00:00:42.889 + cat autorun-spdk.conf 00:00:42.889 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.889 SPDK_TEST_NVME=1 00:00:42.889 SPDK_TEST_FTL=1 00:00:42.889 SPDK_TEST_ISAL=1 00:00:42.889 SPDK_RUN_ASAN=1 00:00:42.889 SPDK_RUN_UBSAN=1 00:00:42.889 SPDK_TEST_XNVME=1 00:00:42.889 SPDK_TEST_NVME_FDP=1 00:00:42.889 SPDK_TEST_NATIVE_DPDK=main 00:00:42.889 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:42.889 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:42.898 RUN_NIGHTLY=1 00:00:42.900 [Pipeline] } 00:00:42.914 [Pipeline] // stage 00:00:42.928 [Pipeline] stage 00:00:42.930 [Pipeline] { (Run VM) 00:00:42.942 [Pipeline] sh 00:00:43.229 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:43.229 + echo 'Start stage prepare_nvme.sh' 00:00:43.229 Start stage prepare_nvme.sh 00:00:43.229 + [[ -n 4 ]] 00:00:43.229 + disk_prefix=ex4 00:00:43.229 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:43.229 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:43.229 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:43.229 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:43.229 ++ SPDK_TEST_NVME=1 00:00:43.229 ++ SPDK_TEST_FTL=1 00:00:43.229 ++ SPDK_TEST_ISAL=1 00:00:43.229 ++ SPDK_RUN_ASAN=1 00:00:43.229 ++ SPDK_RUN_UBSAN=1 00:00:43.229 ++ SPDK_TEST_XNVME=1 00:00:43.229 ++ SPDK_TEST_NVME_FDP=1 00:00:43.229 ++ SPDK_TEST_NATIVE_DPDK=main 00:00:43.229 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:43.229 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:43.229 ++ RUN_NIGHTLY=1 00:00:43.229 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:43.229 + nvme_files=() 00:00:43.229 + declare -A nvme_files 00:00:43.229 + backend_dir=/var/lib/libvirt/images/backends 00:00:43.229 + nvme_files['nvme.img']=5G 00:00:43.229 + nvme_files['nvme-cmb.img']=5G 00:00:43.229 + nvme_files['nvme-multi0.img']=4G 00:00:43.229 + nvme_files['nvme-multi1.img']=4G 00:00:43.229 + nvme_files['nvme-multi2.img']=4G 00:00:43.229 + nvme_files['nvme-openstack.img']=8G 00:00:43.229 + nvme_files['nvme-zns.img']=5G 00:00:43.229 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:43.230 + (( SPDK_TEST_FTL == 1 )) 00:00:43.230 + nvme_files["nvme-ftl.img"]=6G 00:00:43.230 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:43.230 + nvme_files["nvme-fdp.img"]=1G 00:00:43.230 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:43.230 + for nvme in "${!nvme_files[@]}" 00:00:43.230 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:43.230 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:43.230 + for nvme in "${!nvme_files[@]}" 00:00:43.230 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:00:44.175 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:44.175 + for nvme in "${!nvme_files[@]}" 00:00:44.175 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:44.175 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:44.175 + for nvme in "${!nvme_files[@]}" 00:00:44.175 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:44.175 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:44.175 + for nvme in "${!nvme_files[@]}" 00:00:44.175 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:44.175 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:44.175 + for nvme in "${!nvme_files[@]}" 00:00:44.175 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:44.175 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:44.175 + for nvme in "${!nvme_files[@]}" 00:00:44.175 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:44.175 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:44.175 + for nvme in "${!nvme_files[@]}" 00:00:44.175 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:00:44.436 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:44.436 + for nvme in "${!nvme_files[@]}" 00:00:44.436 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:00:45.008 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:45.008 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:00:45.008 + echo 'End stage prepare_nvme.sh' 00:00:45.008 End stage prepare_nvme.sh 00:00:45.022 [Pipeline] sh 00:00:45.308 + DISTRO=fedora39 00:00:45.308 + CPUS=10 00:00:45.308 + RAM=12288 00:00:45.308 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:45.308 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:45.308 00:00:45.308 
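Judging from the NVME_* variable dump printed next, each -b backend spec in the Setup line is a comma-separated tuple in the order image,type,namespaces,cmb,pmr,zns,ms,fdp (an inference from this output, not from the script source); for example:

    -b ex4-nvme-ftl.img,nvme,,,,,true                    # field 7 (ms) -> NVME_MS=true
    -b ex4-nvme-multi0.img,nvme,multi1.img:multi2.img    # field 3: extra namespaces, ':'-separated
    -b ex4-nvme-fdp.img,nvme,,,,,,on                     # field 8 (fdp) -> NVME_FDP=on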
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:45.308 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:45.308 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:45.308 HELP=0 00:00:45.308 DRY_RUN=0 00:00:45.308 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:00:45.308 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:45.308 NVME_AUTO_CREATE=0 00:00:45.308 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:00:45.308 NVME_CMB=,,,, 00:00:45.308 NVME_PMR=,,,, 00:00:45.308 NVME_ZNS=,,,, 00:00:45.308 NVME_MS=true,,,, 00:00:45.308 NVME_FDP=,,,on, 00:00:45.308 SPDK_VAGRANT_DISTRO=fedora39 00:00:45.308 SPDK_VAGRANT_VMCPU=10 00:00:45.308 SPDK_VAGRANT_VMRAM=12288 00:00:45.308 SPDK_VAGRANT_PROVIDER=libvirt 00:00:45.308 SPDK_VAGRANT_HTTP_PROXY= 00:00:45.308 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:45.308 SPDK_OPENSTACK_NETWORK=0 00:00:45.308 VAGRANT_PACKAGE_BOX=0 00:00:45.308 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:45.308 FORCE_DISTRO=true 00:00:45.308 VAGRANT_BOX_VERSION= 00:00:45.308 EXTRA_VAGRANTFILES= 00:00:45.308 NIC_MODEL=e1000 00:00:45.308 00:00:45.308 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:45.308 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:47.859 Bringing machine 'default' up with 'libvirt' provider... 00:00:48.121 ==> default: Creating image (snapshot of base box volume). 00:00:48.383 ==> default: Creating domain with the following settings... 
00:00:48.383 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728383528_71fa90506b488427d8c4 00:00:48.383 ==> default: -- Domain type: kvm 00:00:48.383 ==> default: -- Cpus: 10 00:00:48.383 ==> default: -- Feature: acpi 00:00:48.383 ==> default: -- Feature: apic 00:00:48.383 ==> default: -- Feature: pae 00:00:48.383 ==> default: -- Memory: 12288M 00:00:48.383 ==> default: -- Memory Backing: hugepages: 00:00:48.383 ==> default: -- Management MAC: 00:00:48.383 ==> default: -- Loader: 00:00:48.383 ==> default: -- Nvram: 00:00:48.383 ==> default: -- Base box: spdk/fedora39 00:00:48.383 ==> default: -- Storage pool: default 00:00:48.383 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728383528_71fa90506b488427d8c4.img (20G) 00:00:48.383 ==> default: -- Volume Cache: default 00:00:48.383 ==> default: -- Kernel: 00:00:48.383 ==> default: -- Initrd: 00:00:48.383 ==> default: -- Graphics Type: vnc 00:00:48.383 ==> default: -- Graphics Port: -1 00:00:48.383 ==> default: -- Graphics IP: 127.0.0.1 00:00:48.383 ==> default: -- Graphics Password: Not defined 00:00:48.383 ==> default: -- Video Type: cirrus 00:00:48.383 ==> default: -- Video VRAM: 9216 00:00:48.383 ==> default: -- Sound Type: 00:00:48.383 ==> default: -- Keymap: en-us 00:00:48.383 ==> default: -- TPM Path: 00:00:48.383 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:48.383 ==> default: -- Command line args: 00:00:48.383 ==> default: -> value=-device, 00:00:48.383 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:48.383 ==> default: -> value=-drive, 00:00:48.383 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:48.383 ==> default: -> value=-device, 00:00:48.383 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:48.383 ==> default: -> value=-device, 00:00:48.383 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:48.383 ==> default: -> value=-drive, 00:00:48.383 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:00:48.383 ==> default: -> value=-device, 00:00:48.383 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:48.383 ==> default: -> value=-device, 00:00:48.383 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:48.383 ==> default: -> value=-drive, 00:00:48.383 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:48.383 ==> default: -> value=-device, 00:00:48.383 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:48.383 ==> default: -> value=-drive, 00:00:48.383 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:48.383 ==> default: -> value=-device, 00:00:48.383 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:48.383 ==> default: -> value=-drive, 00:00:48.383 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:48.383 ==> default: -> value=-device, 00:00:48.383 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:48.383 ==> default: -> value=-device, 00:00:48.383 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:48.383 ==> default: -> value=-device, 00:00:48.383 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:48.383 ==> default: -> value=-drive, 00:00:48.383 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:48.383 ==> default: -> value=-device, 00:00:48.383 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:48.645 ==> default: Creating shared folders metadata... 00:00:48.645 ==> default: Starting domain. 00:00:50.563 ==> default: Waiting for domain to get an IP address... 00:01:08.701 ==> default: Waiting for SSH to become available... 00:01:08.701 ==> default: Configuring and enabling network interfaces... 00:01:12.002 default: SSH address: 192.168.121.56:22 00:01:12.002 default: SSH username: vagrant 00:01:12.002 default: SSH auth method: private key 00:01:13.918 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:22.061 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:27.358 ==> default: Mounting SSHFS shared folder... 00:01:29.276 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:29.276 ==> default: Checking Mount.. 00:01:30.663 ==> default: Folder Successfully Mounted! 00:01:30.663 00:01:30.663 SUCCESS! 00:01:30.663 00:01:30.663 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:30.663 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:30.663 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:30.663 00:01:30.674 [Pipeline] } 00:01:30.689 [Pipeline] // stage 00:01:30.698 [Pipeline] dir 00:01:30.699 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:30.701 [Pipeline] { 00:01:30.713 [Pipeline] catchError 00:01:30.714 [Pipeline] { 00:01:30.727 [Pipeline] sh 00:01:31.059 + vagrant ssh-config --host vagrant 00:01:31.059 + sed -ne '/^Host/,$p' 00:01:31.059 + tee ssh_conf 00:01:33.604 Host vagrant 00:01:33.604 HostName 192.168.121.56 00:01:33.604 User vagrant 00:01:33.604 Port 22 00:01:33.604 UserKnownHostsFile /dev/null 00:01:33.604 StrictHostKeyChecking no 00:01:33.604 PasswordAuthentication no 00:01:33.604 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:33.604 IdentitiesOnly yes 00:01:33.604 LogLevel FATAL 00:01:33.604 ForwardAgent yes 00:01:33.604 ForwardX11 yes 00:01:33.604 00:01:33.619 [Pipeline] withEnv 00:01:33.621 [Pipeline] { 00:01:33.634 [Pipeline] sh 00:01:33.919 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:01:33.919 source /etc/os-release 00:01:33.919 [[ -e /image.version ]] && img=$(< /image.version) 00:01:33.919 # Minimal, systemd-like check. 
00:01:33.919 if [[ -e /.dockerenv ]]; then 00:01:33.919 # Clear garbage from the node'\''s name: 00:01:33.919 # agt-er_autotest_547-896 -> autotest_547-896 00:01:33.919 # $HOSTNAME is the actual container id 00:01:33.919 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:33.919 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:33.919 # We can assume this is a mount from a host where container is running, 00:01:33.919 # so fetch its hostname to easily identify the target swarm worker. 00:01:33.919 container="$(< /etc/hostname) ($agent)" 00:01:33.919 else 00:01:33.919 # Fallback 00:01:33.919 container=$agent 00:01:33.919 fi 00:01:33.919 fi 00:01:33.919 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:33.919 ' 00:01:34.194 [Pipeline] } 00:01:34.210 [Pipeline] // withEnv 00:01:34.219 [Pipeline] setCustomBuildProperty 00:01:34.233 [Pipeline] stage 00:01:34.235 [Pipeline] { (Tests) 00:01:34.252 [Pipeline] sh 00:01:34.537 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:34.813 [Pipeline] sh 00:01:35.098 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:35.376 [Pipeline] timeout 00:01:35.377 Timeout set to expire in 50 min 00:01:35.379 [Pipeline] { 00:01:35.392 [Pipeline] sh 00:01:35.675 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:01:36.247 HEAD is now at 92108e0a2 fsdev/aio: add support for null IOs 00:01:36.261 [Pipeline] sh 00:01:36.543 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:01:36.819 [Pipeline] sh 00:01:37.104 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:37.383 [Pipeline] sh 00:01:37.670 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:01:37.932 ++ readlink -f spdk_repo 00:01:37.932 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:37.932 + [[ -n /home/vagrant/spdk_repo ]] 00:01:37.932 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:37.932 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:37.932 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:37.932 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:37.932 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:37.932 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:37.932 + cd /home/vagrant/spdk_repo 00:01:37.932 + source /etc/os-release 00:01:37.932 ++ NAME='Fedora Linux' 00:01:37.932 ++ VERSION='39 (Cloud Edition)' 00:01:37.932 ++ ID=fedora 00:01:37.932 ++ VERSION_ID=39 00:01:37.932 ++ VERSION_CODENAME= 00:01:37.932 ++ PLATFORM_ID=platform:f39 00:01:37.932 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:37.932 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:37.932 ++ LOGO=fedora-logo-icon 00:01:37.932 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:37.932 ++ HOME_URL=https://fedoraproject.org/ 00:01:37.932 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:37.932 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:37.932 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:37.932 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:37.932 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:37.932 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:37.932 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:37.932 ++ SUPPORT_END=2024-11-12 00:01:37.932 ++ VARIANT='Cloud Edition' 00:01:37.932 ++ VARIANT_ID=cloud 00:01:37.932 + uname -a 00:01:37.932 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:37.932 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:38.193 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:38.455 Hugepages 00:01:38.455 node hugesize free / total 00:01:38.455 node0 1048576kB 0 / 0 00:01:38.455 node0 2048kB 0 / 0 00:01:38.455 00:01:38.455 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:38.456 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:38.456 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:38.456 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:01:38.717 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:01:38.717 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:38.717 + rm -f /tmp/spdk-ld-path 00:01:38.717 + source autorun-spdk.conf 00:01:38.717 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.717 ++ SPDK_TEST_NVME=1 00:01:38.717 ++ SPDK_TEST_FTL=1 00:01:38.717 ++ SPDK_TEST_ISAL=1 00:01:38.717 ++ SPDK_RUN_ASAN=1 00:01:38.717 ++ SPDK_RUN_UBSAN=1 00:01:38.717 ++ SPDK_TEST_XNVME=1 00:01:38.717 ++ SPDK_TEST_NVME_FDP=1 00:01:38.717 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:38.717 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:38.717 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.717 ++ RUN_NIGHTLY=1 00:01:38.717 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:38.717 + [[ -n '' ]] 00:01:38.717 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:38.717 + for M in /var/spdk/build-*-manifest.txt 00:01:38.717 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:38.717 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.717 + for M in /var/spdk/build-*-manifest.txt 00:01:38.717 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:38.717 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.717 + for M in /var/spdk/build-*-manifest.txt 00:01:38.717 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:38.717 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.717 ++ uname 00:01:38.717 + [[ Linux == 
\L\i\n\u\x ]] 00:01:38.717 + sudo dmesg -T 00:01:38.717 + sudo dmesg --clear 00:01:38.717 + dmesg_pid=5761 00:01:38.717 + [[ Fedora Linux == FreeBSD ]] 00:01:38.717 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.717 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.718 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:38.718 + sudo dmesg -Tw 00:01:38.718 + [[ -x /usr/src/fio-static/fio ]] 00:01:38.718 + export FIO_BIN=/usr/src/fio-static/fio 00:01:38.718 + FIO_BIN=/usr/src/fio-static/fio 00:01:38.718 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:38.718 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:38.718 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:38.718 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.718 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:38.718 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:38.718 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.718 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:38.718 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:38.718 Test configuration: 00:01:38.718 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.718 SPDK_TEST_NVME=1 00:01:38.718 SPDK_TEST_FTL=1 00:01:38.718 SPDK_TEST_ISAL=1 00:01:38.718 SPDK_RUN_ASAN=1 00:01:38.718 SPDK_RUN_UBSAN=1 00:01:38.718 SPDK_TEST_XNVME=1 00:01:38.718 SPDK_TEST_NVME_FDP=1 00:01:38.718 SPDK_TEST_NATIVE_DPDK=main 00:01:38.718 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:38.718 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.718 RUN_NIGHTLY=1 10:32:59 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:38.718 10:32:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:38.718 10:32:59 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:38.718 10:32:59 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:38.718 10:32:59 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:38.718 10:32:59 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:38.718 10:32:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.718 10:32:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.718 10:32:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.718 10:32:59 -- paths/export.sh@5 -- $ export PATH 00:01:38.718 10:32:59 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:38.980 10:32:59 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:38.980 10:32:59 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:38.980 10:32:59 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728383579.XXXXXX 00:01:38.980 10:32:59 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728383579.QKCbgR 00:01:38.980 10:32:59 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:38.980 10:32:59 -- common/autobuild_common.sh@492 -- $ '[' -n main ']' 00:01:38.980 10:32:59 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:38.980 10:32:59 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:01:38.980 10:32:59 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:38.980 10:32:59 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:38.980 10:32:59 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:38.980 10:32:59 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:38.980 10:32:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.980 10:32:59 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:01:38.980 10:32:59 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:38.980 10:32:59 -- pm/common@17 -- $ local monitor 00:01:38.980 10:32:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.980 10:32:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:38.980 10:32:59 -- pm/common@25 -- $ sleep 1 00:01:38.980 10:32:59 -- pm/common@21 -- $ date +%s 00:01:38.980 10:32:59 -- pm/common@21 -- $ date +%s 00:01:38.980 10:32:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728383579 00:01:38.980 10:32:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728383579 00:01:38.980 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728383579_collect-vmstat.pm.log 00:01:38.980 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728383579_collect-cpu-load.pm.log 00:01:39.922 10:33:00 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:39.922 10:33:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:39.922 10:33:00 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:39.922 10:33:00 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:39.922 10:33:00 -- spdk/autobuild.sh@16 -- $ date -u 00:01:39.922 Tue 
Oct 8 10:33:00 AM UTC 2024 00:01:39.922 10:33:00 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:39.922 v25.01-pre-41-g92108e0a2 00:01:39.922 10:33:00 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:39.922 10:33:00 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:39.922 10:33:00 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:39.922 10:33:00 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:39.922 10:33:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.922 ************************************ 00:01:39.922 START TEST asan 00:01:39.922 ************************************ 00:01:39.922 using asan 00:01:39.922 10:33:00 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:39.922 00:01:39.922 real 0m0.001s 00:01:39.922 user 0m0.000s 00:01:39.922 sys 0m0.000s 00:01:39.922 ************************************ 00:01:39.922 END TEST asan 00:01:39.922 10:33:00 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:39.922 10:33:00 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:39.922 ************************************ 00:01:39.922 10:33:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:39.922 10:33:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:39.922 10:33:00 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:39.922 10:33:00 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:39.922 10:33:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.922 ************************************ 00:01:39.922 START TEST ubsan 00:01:39.922 ************************************ 00:01:39.922 using ubsan 00:01:39.922 10:33:00 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:39.922 00:01:39.922 real 0m0.000s 00:01:39.922 user 0m0.000s 00:01:39.922 sys 0m0.000s 00:01:39.922 ************************************ 00:01:39.922 END TEST ubsan 00:01:39.922 ************************************ 00:01:39.922 10:33:00 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:39.922 10:33:00 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:39.922 10:33:00 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:39.922 10:33:00 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:39.922 10:33:00 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:39.922 10:33:00 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:01:39.922 10:33:00 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:39.922 10:33:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.922 ************************************ 00:01:39.922 START TEST build_native_dpdk 00:01:39.922 ************************************ 00:01:39.922 10:33:00 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:01:39.922 10:33:00 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:40.182 10:33:00 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:40.182 e7bc451c99 trace: disable traces at compilation 00:01:40.182 dbdf3d5581 timer: override CPU TSC frequency with OS value 00:01:40.182 7268f21aa0 timer: improve TSC estimation accuracy 00:01:40.182 8df71650e9 drivers: remove more redundant newline in Marvell drivers 00:01:40.182 41b09d64e3 eal/x86: fix 32-bit write combining store 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc0 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.11.0-rc0 
21.11.0 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc0 '<' 21.11.0 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:40.182 10:33:00 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:40.182 10:33:00 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:40.182 patching file config/rte_config.h 00:01:40.183 Hunk #1 succeeded at 71 (offset 12 lines). 
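The xtrace above (and the near-identical comparisons that follow) is scripts/common.sh walking two dotted versions component by component; distilled into a standalone sketch (the 07 -> 7 decimal handling follows the trace, while treating non-numeric parts such as rc0 as 0 is a simplification of the real helper):

    decimal() { [[ $1 =~ ^[0-9]+$ ]] && echo $((10#$1)) || echo 0; }   # "07" -> 7, "rc0" -> 0

    cmp_versions() {                     # usage: cmp_versions 24.11.0-rc0 '<' 21.11.0
        local IFS='.-:' op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"           # "24.11.0-rc0" -> 24 11 0 rc0  (ver1_l=4, as traced)
        read -ra ver2 <<< "$3"           # "21.11.0"     -> 21 11 0     (ver2_l=3)
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local d1="$(decimal "${ver1[v]:-0}")" d2="$(decimal "${ver2[v]:-0}")"
            (( d1 > d2 )) && { [[ $op == '>=' ]]; return; }   # strictly newer: true only for >=
            (( d1 < d2 )) && { [[ $op == '<'  ]]; return; }   # strictly older: true only for <
        done
        [[ $op == '>=' ]]                                     # equal: >= holds, < does not
    }

    cmp_versions 24.11.0-rc0 '<'  21.11.0 || echo 'not older than 21.11.0'        # returns 1, as logged
    cmp_versions 24.11.0-rc0 '>=' 24.07.0 && echo 'new enough for the pci_uio patch'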
00:01:40.183 10:33:00 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc0 24.07.0 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc0 '<' 24.07.0 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:40.183 10:33:00 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 24.11.0-rc0 24.07.0 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc0 '>=' 24.07.0 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:40.183 10:33:00 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:01:40.183 10:33:00 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:01:40.183 patching file drivers/bus/pci/linux/pci_uio.c 00:01:40.183 10:33:00 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:40.183 10:33:00 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:40.183 10:33:00 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:40.183 10:33:00 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:40.183 10:33:00 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:44.392 The Meson build system 00:01:44.392 Version: 1.5.0 00:01:44.392 Source dir: /home/vagrant/spdk_repo/dpdk 00:01:44.392 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:01:44.392 Build type: native build 00:01:44.392 Program cat found: YES (/usr/bin/cat) 00:01:44.392 Project name: DPDK 00:01:44.392 Project version: 24.11.0-rc0 00:01:44.392 C compiler for the 
host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:44.392 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:44.392 Host machine cpu family: x86_64 00:01:44.392 Host machine cpu: x86_64 00:01:44.392 Message: ## Building in Developer Mode ## 00:01:44.392 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:44.392 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:01:44.392 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:01:44.392 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:44.392 Program cat found: YES (/usr/bin/cat) 00:01:44.392 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:44.392 Compiler for C supports arguments -march=native: YES 00:01:44.392 Checking for size of "void *" : 8 00:01:44.392 Checking for size of "void *" : 8 (cached) 00:01:44.392 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:44.392 Library m found: YES 00:01:44.392 Library numa found: YES 00:01:44.392 Has header "numaif.h" : YES 00:01:44.392 Library fdt found: NO 00:01:44.392 Library execinfo found: NO 00:01:44.392 Has header "execinfo.h" : YES 00:01:44.392 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:44.392 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:44.392 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:44.392 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:44.392 Run-time dependency openssl found: YES 3.1.1 00:01:44.392 Run-time dependency libpcap found: YES 1.10.4 00:01:44.392 Has header "pcap.h" with dependency libpcap: YES 00:01:44.392 Compiler for C supports arguments -Wcast-qual: YES 00:01:44.392 Compiler for C supports arguments -Wdeprecated: YES 00:01:44.392 Compiler for C supports arguments -Wformat: YES 00:01:44.392 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:44.392 Compiler for C supports arguments -Wformat-security: NO 00:01:44.392 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:44.392 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:44.392 Compiler for C supports arguments -Wnested-externs: YES 00:01:44.392 Compiler for C supports arguments -Wold-style-definition: YES 00:01:44.392 Compiler for C supports arguments -Wpointer-arith: YES 00:01:44.392 Compiler for C supports arguments -Wsign-compare: YES 00:01:44.392 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:44.392 Compiler for C supports arguments -Wundef: YES 00:01:44.392 Compiler for C supports arguments -Wwrite-strings: YES 00:01:44.392 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:44.392 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:44.392 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:44.392 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:44.392 Program objdump found: YES (/usr/bin/objdump) 00:01:44.392 Compiler for C supports arguments -mavx512f: YES 00:01:44.392 Checking if "AVX512 checking" compiles: YES 00:01:44.392 Fetching value of define "__SSE4_2__" : 1 00:01:44.392 Fetching value of define "__AES__" : 1 00:01:44.392 Fetching value of define "__AVX__" : 1 00:01:44.392 Fetching value of define "__AVX2__" : 1 00:01:44.392 Fetching value of define "__AVX512BW__" : 1 00:01:44.392 Fetching value of define "__AVX512CD__" : 1 
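Each "Compiler for C supports arguments" probe above boils down to meson test-compiling a throwaway translation unit with the candidate flag; a hand-rolled illustration of the same check (not meson's actual implementation):

    echo 'int main(void){return 0;}' > /tmp/flagtest.c
    # unknown flags (e.g. -mavx512f on an old gcc) make the compile fail outright;
    # -Werror also catches flags that merely provoke a warning
    gcc -Werror -mavx512f -c /tmp/flagtest.c -o /dev/null && echo YES || echo NO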
00:01:44.392 Fetching value of define "__AVX512DQ__" : 1 00:01:44.392 Fetching value of define "__AVX512F__" : 1 00:01:44.392 Fetching value of define "__AVX512VL__" : 1 00:01:44.392 Fetching value of define "__PCLMUL__" : 1 00:01:44.392 Fetching value of define "__RDRND__" : 1 00:01:44.392 Fetching value of define "__RDSEED__" : 1 00:01:44.392 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:44.392 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:44.392 Message: lib/log: Defining dependency "log" 00:01:44.392 Message: lib/kvargs: Defining dependency "kvargs" 00:01:44.392 Message: lib/argparse: Defining dependency "argparse" 00:01:44.392 Message: lib/telemetry: Defining dependency "telemetry" 00:01:44.392 Checking for function "getentropy" : NO 00:01:44.392 Message: lib/eal: Defining dependency "eal" 00:01:44.392 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:44.392 Message: lib/ring: Defining dependency "ring" 00:01:44.392 Message: lib/rcu: Defining dependency "rcu" 00:01:44.392 Message: lib/mempool: Defining dependency "mempool" 00:01:44.392 Message: lib/mbuf: Defining dependency "mbuf" 00:01:44.392 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:44.392 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.392 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:44.392 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:44.392 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:44.392 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:44.392 Compiler for C supports arguments -mpclmul: YES 00:01:44.392 Compiler for C supports arguments -maes: YES 00:01:44.392 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:44.392 Compiler for C supports arguments -mavx512bw: YES 00:01:44.392 Compiler for C supports arguments -mavx512dq: YES 00:01:44.392 Compiler for C supports arguments -mavx512vl: YES 00:01:44.392 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:44.392 Compiler for C supports arguments -mavx2: YES 00:01:44.392 Compiler for C supports arguments -mavx: YES 00:01:44.392 Message: lib/net: Defining dependency "net" 00:01:44.392 Message: lib/meter: Defining dependency "meter" 00:01:44.392 Message: lib/ethdev: Defining dependency "ethdev" 00:01:44.392 Message: lib/pci: Defining dependency "pci" 00:01:44.392 Message: lib/cmdline: Defining dependency "cmdline" 00:01:44.392 Message: lib/metrics: Defining dependency "metrics" 00:01:44.392 Message: lib/hash: Defining dependency "hash" 00:01:44.392 Message: lib/timer: Defining dependency "timer" 00:01:44.392 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.392 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:44.392 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:44.392 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:44.392 Message: lib/acl: Defining dependency "acl" 00:01:44.392 Message: lib/bbdev: Defining dependency "bbdev" 00:01:44.392 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:44.392 Run-time dependency libelf found: YES 0.191 00:01:44.392 Message: lib/bpf: Defining dependency "bpf" 00:01:44.392 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:44.392 Message: lib/compressdev: Defining dependency "compressdev" 00:01:44.392 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:44.392 Message: lib/distributor: Defining dependency "distributor" 00:01:44.392 Message: lib/dmadev: Defining dependency "dmadev" 00:01:44.392 Message: lib/efd: Defining 
dependency "efd" 00:01:44.392 Message: lib/eventdev: Defining dependency "eventdev" 00:01:44.392 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:44.392 Message: lib/gpudev: Defining dependency "gpudev" 00:01:44.392 Message: lib/gro: Defining dependency "gro" 00:01:44.392 Message: lib/gso: Defining dependency "gso" 00:01:44.392 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:44.392 Message: lib/jobstats: Defining dependency "jobstats" 00:01:44.392 Message: lib/latencystats: Defining dependency "latencystats" 00:01:44.392 Message: lib/lpm: Defining dependency "lpm" 00:01:44.392 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.392 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:44.392 Fetching value of define "__AVX512IFMA__" : 1 00:01:44.392 Message: lib/member: Defining dependency "member" 00:01:44.392 Message: lib/pcapng: Defining dependency "pcapng" 00:01:44.392 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:44.392 Message: lib/power: Defining dependency "power" 00:01:44.393 Message: lib/rawdev: Defining dependency "rawdev" 00:01:44.393 Message: lib/regexdev: Defining dependency "regexdev" 00:01:44.393 Message: lib/mldev: Defining dependency "mldev" 00:01:44.393 Message: lib/rib: Defining dependency "rib" 00:01:44.393 Message: lib/reorder: Defining dependency "reorder" 00:01:44.393 Message: lib/sched: Defining dependency "sched" 00:01:44.393 Message: lib/security: Defining dependency "security" 00:01:44.393 Message: lib/stack: Defining dependency "stack" 00:01:44.393 Has header "linux/userfaultfd.h" : YES 00:01:44.393 Has header "linux/vduse.h" : YES 00:01:44.393 Message: lib/vhost: Defining dependency "vhost" 00:01:44.393 Message: lib/ipsec: Defining dependency "ipsec" 00:01:44.393 Message: lib/pdcp: Defining dependency "pdcp" 00:01:44.393 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:44.393 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:44.393 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:44.393 Message: lib/fib: Defining dependency "fib" 00:01:44.393 Message: lib/port: Defining dependency "port" 00:01:44.393 Message: lib/pdump: Defining dependency "pdump" 00:01:44.393 Message: lib/table: Defining dependency "table" 00:01:44.393 Message: lib/pipeline: Defining dependency "pipeline" 00:01:44.393 Message: lib/graph: Defining dependency "graph" 00:01:44.393 Message: lib/node: Defining dependency "node" 00:01:44.393 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:44.393 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:44.393 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:44.393 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:46.309 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:46.309 Compiler for C supports arguments -Wno-unused-value: YES 00:01:46.309 Compiler for C supports arguments -Wno-format: YES 00:01:46.309 Compiler for C supports arguments -Wno-format-security: YES 00:01:46.309 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:46.309 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:46.309 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:46.309 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:46.309 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:46.309 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:46.309 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:46.309 Compiler for 
C supports arguments -mavx512bw: YES (cached) 00:01:46.309 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:46.309 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:46.309 Has header "sys/epoll.h" : YES 00:01:46.309 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:46.309 Configuring doxy-api-html.conf using configuration 00:01:46.309 Configuring doxy-api-man.conf using configuration 00:01:46.309 Program mandb found: YES (/usr/bin/mandb) 00:01:46.309 Program sphinx-build found: NO 00:01:46.309 Configuring rte_build_config.h using configuration 00:01:46.309 Message: 00:01:46.309 ================= 00:01:46.309 Applications Enabled 00:01:46.309 ================= 00:01:46.309 00:01:46.309 apps: 00:01:46.309 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:46.309 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:46.309 test-pmd, test-regex, test-sad, test-security-perf, 00:01:46.309 00:01:46.309 Message: 00:01:46.309 ================= 00:01:46.309 Libraries Enabled 00:01:46.309 ================= 00:01:46.309 00:01:46.309 libs: 00:01:46.309 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:01:46.309 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:01:46.309 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:01:46.309 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:01:46.309 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:01:46.309 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:01:46.309 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:01:46.309 graph, node, 00:01:46.309 00:01:46.309 Message: 00:01:46.309 =============== 00:01:46.309 Drivers Enabled 00:01:46.309 =============== 00:01:46.309 00:01:46.309 common: 00:01:46.309 00:01:46.309 bus: 00:01:46.309 pci, vdev, 00:01:46.309 mempool: 00:01:46.309 ring, 00:01:46.309 dma: 00:01:46.309 00:01:46.309 net: 00:01:46.309 i40e, 00:01:46.309 raw: 00:01:46.309 00:01:46.309 crypto: 00:01:46.309 00:01:46.309 compress: 00:01:46.309 00:01:46.309 regex: 00:01:46.309 00:01:46.309 ml: 00:01:46.309 00:01:46.309 vdpa: 00:01:46.309 00:01:46.309 event: 00:01:46.309 00:01:46.309 baseband: 00:01:46.309 00:01:46.309 gpu: 00:01:46.309 00:01:46.309 00:01:46.309 Message: 00:01:46.309 ================= 00:01:46.309 Content Skipped 00:01:46.309 ================= 00:01:46.309 00:01:46.309 apps: 00:01:46.309 00:01:46.309 libs: 00:01:46.309 00:01:46.309 drivers: 00:01:46.309 common/cpt: not in enabled drivers build config 00:01:46.309 common/dpaax: not in enabled drivers build config 00:01:46.309 common/iavf: not in enabled drivers build config 00:01:46.309 common/idpf: not in enabled drivers build config 00:01:46.309 common/ionic: not in enabled drivers build config 00:01:46.309 common/mvep: not in enabled drivers build config 00:01:46.309 common/octeontx: not in enabled drivers build config 00:01:46.309 bus/auxiliary: not in enabled drivers build config 00:01:46.309 bus/cdx: not in enabled drivers build config 00:01:46.309 bus/dpaa: not in enabled drivers build config 00:01:46.309 bus/fslmc: not in enabled drivers build config 00:01:46.309 bus/ifpga: not in enabled drivers build config 00:01:46.309 bus/platform: not in enabled drivers build config 00:01:46.309 bus/uacce: not in enabled drivers build config 00:01:46.309 bus/vmbus: not in enabled drivers build config 00:01:46.309 common/cnxk: not in 
enabled drivers build config 00:01:46.309 common/mlx5: not in enabled drivers build config 00:01:46.309 common/nfp: not in enabled drivers build config 00:01:46.309 common/nitrox: not in enabled drivers build config 00:01:46.309 common/qat: not in enabled drivers build config 00:01:46.309 common/sfc_efx: not in enabled drivers build config 00:01:46.309 mempool/bucket: not in enabled drivers build config 00:01:46.309 mempool/cnxk: not in enabled drivers build config 00:01:46.309 mempool/dpaa: not in enabled drivers build config 00:01:46.309 mempool/dpaa2: not in enabled drivers build config 00:01:46.310 mempool/octeontx: not in enabled drivers build config 00:01:46.310 mempool/stack: not in enabled drivers build config 00:01:46.310 dma/cnxk: not in enabled drivers build config 00:01:46.310 dma/dpaa: not in enabled drivers build config 00:01:46.310 dma/dpaa2: not in enabled drivers build config 00:01:46.310 dma/hisilicon: not in enabled drivers build config 00:01:46.310 dma/idxd: not in enabled drivers build config 00:01:46.310 dma/ioat: not in enabled drivers build config 00:01:46.310 dma/odm: not in enabled drivers build config 00:01:46.310 dma/skeleton: not in enabled drivers build config 00:01:46.310 net/af_packet: not in enabled drivers build config 00:01:46.310 net/af_xdp: not in enabled drivers build config 00:01:46.310 net/ark: not in enabled drivers build config 00:01:46.310 net/atlantic: not in enabled drivers build config 00:01:46.310 net/avp: not in enabled drivers build config 00:01:46.310 net/axgbe: not in enabled drivers build config 00:01:46.310 net/bnx2x: not in enabled drivers build config 00:01:46.310 net/bnxt: not in enabled drivers build config 00:01:46.310 net/bonding: not in enabled drivers build config 00:01:46.310 net/cnxk: not in enabled drivers build config 00:01:46.310 net/cpfl: not in enabled drivers build config 00:01:46.310 net/cxgbe: not in enabled drivers build config 00:01:46.310 net/dpaa: not in enabled drivers build config 00:01:46.310 net/dpaa2: not in enabled drivers build config 00:01:46.310 net/e1000: not in enabled drivers build config 00:01:46.310 net/ena: not in enabled drivers build config 00:01:46.310 net/enetc: not in enabled drivers build config 00:01:46.310 net/enetfec: not in enabled drivers build config 00:01:46.310 net/enic: not in enabled drivers build config 00:01:46.310 net/failsafe: not in enabled drivers build config 00:01:46.310 net/fm10k: not in enabled drivers build config 00:01:46.310 net/gve: not in enabled drivers build config 00:01:46.310 net/hinic: not in enabled drivers build config 00:01:46.310 net/hns3: not in enabled drivers build config 00:01:46.310 net/iavf: not in enabled drivers build config 00:01:46.310 net/ice: not in enabled drivers build config 00:01:46.310 net/idpf: not in enabled drivers build config 00:01:46.310 net/igc: not in enabled drivers build config 00:01:46.310 net/ionic: not in enabled drivers build config 00:01:46.310 net/ipn3ke: not in enabled drivers build config 00:01:46.310 net/ixgbe: not in enabled drivers build config 00:01:46.310 net/mana: not in enabled drivers build config 00:01:46.310 net/memif: not in enabled drivers build config 00:01:46.310 net/mlx4: not in enabled drivers build config 00:01:46.310 net/mlx5: not in enabled drivers build config 00:01:46.310 net/mvneta: not in enabled drivers build config 00:01:46.310 net/mvpp2: not in enabled drivers build config 00:01:46.310 net/netvsc: not in enabled drivers build config 00:01:46.310 net/nfb: not in enabled drivers build config 00:01:46.310 
net/nfp: not in enabled drivers build config 00:01:46.310 net/ngbe: not in enabled drivers build config 00:01:46.310 net/ntnic: not in enabled drivers build config 00:01:46.310 net/null: not in enabled drivers build config 00:01:46.310 net/octeontx: not in enabled drivers build config 00:01:46.310 net/octeon_ep: not in enabled drivers build config 00:01:46.310 net/pcap: not in enabled drivers build config 00:01:46.310 net/pfe: not in enabled drivers build config 00:01:46.310 net/qede: not in enabled drivers build config 00:01:46.310 net/ring: not in enabled drivers build config 00:01:46.310 net/sfc: not in enabled drivers build config 00:01:46.310 net/softnic: not in enabled drivers build config 00:01:46.310 net/tap: not in enabled drivers build config 00:01:46.310 net/thunderx: not in enabled drivers build config 00:01:46.310 net/txgbe: not in enabled drivers build config 00:01:46.310 net/vdev_netvsc: not in enabled drivers build config 00:01:46.310 net/vhost: not in enabled drivers build config 00:01:46.310 net/virtio: not in enabled drivers build config 00:01:46.310 net/vmxnet3: not in enabled drivers build config 00:01:46.310 raw/cnxk_bphy: not in enabled drivers build config 00:01:46.310 raw/cnxk_gpio: not in enabled drivers build config 00:01:46.310 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:46.310 raw/ifpga: not in enabled drivers build config 00:01:46.310 raw/ntb: not in enabled drivers build config 00:01:46.310 raw/skeleton: not in enabled drivers build config 00:01:46.310 crypto/armv8: not in enabled drivers build config 00:01:46.310 crypto/bcmfs: not in enabled drivers build config 00:01:46.310 crypto/caam_jr: not in enabled drivers build config 00:01:46.310 crypto/ccp: not in enabled drivers build config 00:01:46.310 crypto/cnxk: not in enabled drivers build config 00:01:46.310 crypto/dpaa_sec: not in enabled drivers build config 00:01:46.310 crypto/dpaa2_sec: not in enabled drivers build config 00:01:46.310 crypto/ionic: not in enabled drivers build config 00:01:46.310 crypto/ipsec_mb: not in enabled drivers build config 00:01:46.310 crypto/mlx5: not in enabled drivers build config 00:01:46.310 crypto/mvsam: not in enabled drivers build config 00:01:46.310 crypto/nitrox: not in enabled drivers build config 00:01:46.310 crypto/null: not in enabled drivers build config 00:01:46.310 crypto/octeontx: not in enabled drivers build config 00:01:46.310 crypto/openssl: not in enabled drivers build config 00:01:46.310 crypto/scheduler: not in enabled drivers build config 00:01:46.310 crypto/uadk: not in enabled drivers build config 00:01:46.310 crypto/virtio: not in enabled drivers build config 00:01:46.310 compress/isal: not in enabled drivers build config 00:01:46.310 compress/mlx5: not in enabled drivers build config 00:01:46.310 compress/nitrox: not in enabled drivers build config 00:01:46.310 compress/octeontx: not in enabled drivers build config 00:01:46.310 compress/uadk: not in enabled drivers build config 00:01:46.310 compress/zlib: not in enabled drivers build config 00:01:46.310 regex/mlx5: not in enabled drivers build config 00:01:46.310 regex/cn9k: not in enabled drivers build config 00:01:46.310 ml/cnxk: not in enabled drivers build config 00:01:46.310 vdpa/ifc: not in enabled drivers build config 00:01:46.310 vdpa/mlx5: not in enabled drivers build config 00:01:46.310 vdpa/nfp: not in enabled drivers build config 00:01:46.310 vdpa/sfc: not in enabled drivers build config 00:01:46.310 event/cnxk: not in enabled drivers build config 00:01:46.310 event/dlb2: 
not in enabled drivers build config 00:01:46.310 event/dpaa: not in enabled drivers build config 00:01:46.310 event/dpaa2: not in enabled drivers build config 00:01:46.310 event/dsw: not in enabled drivers build config 00:01:46.310 event/opdl: not in enabled drivers build config 00:01:46.310 event/skeleton: not in enabled drivers build config 00:01:46.310 event/sw: not in enabled drivers build config 00:01:46.310 event/octeontx: not in enabled drivers build config 00:01:46.310 baseband/acc: not in enabled drivers build config 00:01:46.310 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:46.310 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:46.310 baseband/la12xx: not in enabled drivers build config 00:01:46.310 baseband/null: not in enabled drivers build config 00:01:46.310 baseband/turbo_sw: not in enabled drivers build config 00:01:46.310 gpu/cuda: not in enabled drivers build config 00:01:46.310 00:01:46.310 00:01:46.310 Build targets in project: 219 00:01:46.310 00:01:46.310 DPDK 24.11.0-rc0 00:01:46.310 00:01:46.310 User defined options 00:01:46.310 libdir : lib 00:01:46.310 prefix : /home/vagrant/spdk_repo/dpdk/build 00:01:46.310 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:46.310 c_link_args : 00:01:46.310 enable_docs : false 00:01:46.310 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:46.310 enable_kmods : false 00:01:46.310 machine : native 00:01:46.310 tests : false 00:01:46.310 00:01:46.310 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.310 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:46.310 10:33:06 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:01:46.310 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:01:46.310 [1/718] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:46.310 [2/718] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:46.310 [3/718] Linking static target lib/librte_kvargs.a 00:01:46.310 [4/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:46.310 [5/718] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:46.310 [6/718] Linking static target lib/librte_log.a 00:01:46.571 [7/718] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:01:46.571 [8/718] Linking static target lib/librte_argparse.a 00:01:46.571 [9/718] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.571 [10/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:46.571 [11/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:46.571 [12/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:46.571 [13/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:46.571 [14/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:46.571 [15/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:46.571 [16/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:46.571 [17/718] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.832 [18/718] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.832 [19/718] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:46.832 [20/718] Linking target lib/librte_log.so.25.0 00:01:46.832 [21/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:46.832 [22/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:47.098 [23/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:47.098 [24/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:47.098 [25/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:47.098 [26/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:47.098 [27/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:47.098 [28/718] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:01:47.098 [29/718] Linking target lib/librte_kvargs.so.25.0 00:01:47.098 [30/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:47.360 [31/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:47.360 [32/718] Linking static target lib/librte_telemetry.a 00:01:47.360 [33/718] Linking target lib/librte_argparse.so.25.0 00:01:47.360 [34/718] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:01:47.360 [35/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:47.360 [36/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:47.360 [37/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:47.360 [38/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:47.360 [39/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:47.619 [40/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:47.619 [41/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:47.619 [42/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:47.619 [43/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:47.619 [44/718] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.619 [45/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:47.619 [46/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:47.619 [47/718] Linking target lib/librte_telemetry.so.25.0 00:01:47.619 [48/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:47.619 [49/718] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:01:47.877 [50/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:47.877 [51/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:47.877 [52/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:47.877 [53/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:47.877 [54/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:47.877 [55/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:47.877 [56/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:48.157 [57/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:48.157 [58/718] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:48.157 [59/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:48.157 [60/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:48.157 [61/718] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:48.157 [62/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:48.157 [63/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:48.157 [64/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:48.157 [65/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:48.438 [66/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:48.438 [67/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:48.438 [68/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:48.438 [69/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:48.438 [70/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:48.438 [71/718] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:48.438 [72/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:48.438 [73/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:48.697 [74/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:48.697 [75/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:48.697 [76/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:48.697 [77/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:48.697 [78/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:48.697 [79/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:48.697 [80/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:48.697 [81/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:48.697 [82/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:48.697 [83/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:48.955 [84/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:48.955 [85/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:48.955 [86/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:48.955 [87/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:48.955 [88/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:48.955 [89/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:49.213 [90/718] Linking static target lib/librte_eal.a 00:01:49.213 [91/718] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:49.213 [92/718] Linking static target lib/librte_ring.a 00:01:49.214 [93/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:49.214 [94/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:49.214 [95/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:49.214 [96/718] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:49.214 [97/718] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.214 [98/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:49.472 [99/718] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:49.472 [100/718] Linking static target lib/librte_mempool.a 00:01:49.472 [101/718] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:49.472 [102/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:49.472 [103/718] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:49.472 [104/718] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:49.730 [105/718] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:49.730 [106/718] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:49.730 [107/718] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:49.730 [108/718] Linking static target lib/librte_rcu.a 00:01:49.730 [109/718] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:49.730 [110/718] Linking static target lib/librte_meter.a 00:01:49.730 [111/718] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.988 [112/718] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:49.988 [113/718] Linking static target lib/librte_net.a 00:01:49.988 [114/718] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.988 [115/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:49.988 [116/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:49.988 [117/718] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.988 [118/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:49.988 [119/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:49.988 [120/718] Linking static target lib/librte_mbuf.a 00:01:49.988 [121/718] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.247 [122/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:50.505 [123/718] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.505 [124/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:50.505 [125/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:50.505 [126/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:50.505 [127/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:50.505 [128/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:50.763 [129/718] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:50.763 [130/718] Linking static target lib/librte_pci.a 00:01:50.763 [131/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:50.763 [132/718] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.763 [133/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:50.763 [134/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:50.763 [135/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:51.022 [136/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:51.022 [137/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:51.022 [138/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:51.022 [139/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:51.022 [140/718] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:51.022 [141/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:51.022 [142/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:51.022 [143/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:51.022 [144/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:51.022 [145/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:51.022 [146/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:51.022 [147/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:51.022 [148/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:51.280 [149/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.280 [150/718] Linking static target lib/librte_cmdline.a 00:01:51.280 [151/718] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:51.280 [152/718] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:51.280 [153/718] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:51.539 [154/718] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:51.539 [155/718] Linking static target lib/librte_metrics.a 00:01:51.539 [156/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:51.539 [157/718] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:51.797 [158/718] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.797 [159/718] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.797 [160/718] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:51.797 [161/718] Linking static target lib/librte_timer.a 00:01:51.797 [162/718] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:52.055 [163/718] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:52.055 [164/718] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.055 [165/718] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:52.314 [166/718] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:52.314 [167/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:52.571 [168/718] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:52.571 [169/718] Linking static target lib/librte_bitratestats.a 00:01:52.571 [170/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:52.571 [171/718] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:52.571 [172/718] Linking static target lib/librte_bbdev.a 00:01:52.830 [173/718] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.830 [174/718] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:52.830 [175/718] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:52.830 [176/718] Linking static target lib/librte_hash.a 00:01:52.830 [177/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:53.089 [178/718] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:53.089 [179/718] Linking static target lib/acl/libavx2_tmp.a 00:01:53.089 [180/718] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.089 [181/718] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:53.089 [182/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:53.089 [183/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:53.089 [184/718] Linking static target lib/librte_ethdev.a 00:01:53.346 [185/718] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.346 [186/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:53.346 [187/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:53.346 [188/718] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.346 [189/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:53.346 [190/718] Linking target lib/librte_eal.so.25.0 00:01:53.346 [191/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:53.604 [192/718] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:01:53.604 [193/718] Linking target lib/librte_ring.so.25.0 00:01:53.604 [194/718] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:53.604 [195/718] Linking target lib/librte_meter.so.25.0 00:01:53.604 [196/718] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:01:53.604 [197/718] Linking target lib/librte_rcu.so.25.0 00:01:53.604 [198/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:53.604 [199/718] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:01:53.604 [200/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:53.604 [201/718] Linking target lib/librte_mempool.so.25.0 00:01:53.604 [202/718] Linking target lib/librte_pci.so.25.0 00:01:53.604 [203/718] Linking target lib/librte_timer.so.25.0 00:01:53.861 [204/718] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:01:53.861 [205/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:53.861 [206/718] Linking static target lib/librte_cfgfile.a 00:01:53.861 [207/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:53.861 [208/718] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:01:53.861 [209/718] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:01:53.861 [210/718] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:01:53.861 [211/718] Linking target lib/librte_mbuf.so.25.0 00:01:53.861 [212/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:53.861 [213/718] Linking static target lib/librte_compressdev.a 00:01:53.861 [214/718] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:01:53.861 [215/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:53.861 [216/718] Linking static target lib/librte_acl.a 00:01:53.861 [217/718] Linking target lib/librte_net.so.25.0 00:01:53.861 [218/718] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.119 [219/718] Linking target lib/librte_bbdev.so.25.0 00:01:54.119 [220/718] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:01:54.119 [221/718] Linking target lib/librte_cfgfile.so.25.0 00:01:54.119 [222/718] Linking target lib/librte_cmdline.so.25.0 00:01:54.119 [223/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:54.119 [224/718] Linking target lib/librte_hash.so.25.0 
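The entries above show ninja emitting per-library symbol files (e.g. librte_log.so.25.0.symbols) and linking the shared objects. One illustrative way to inspect what a finished library exports, assuming binutils' nm is present on the build host and the usual lib/ layout inside the build directory shown in this log:

    # assumption: shared objects land under <builddir>/lib
    nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.so.25.0 | head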
00:01:54.119 [225/718] Linking static target lib/librte_bpf.a 00:01:54.119 [226/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:54.119 [227/718] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.119 [228/718] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:01:54.119 [229/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:54.119 [230/718] Linking target lib/librte_acl.so.25.0 00:01:54.376 [231/718] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.376 [232/718] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:01:54.377 [233/718] Linking target lib/librte_compressdev.so.25.0 00:01:54.377 [234/718] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.377 [235/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:54.377 [236/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:54.377 [237/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:54.377 [238/718] Linking static target lib/librte_distributor.a 00:01:54.377 [239/718] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:54.634 [240/718] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:54.634 [241/718] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.634 [242/718] Linking target lib/librte_distributor.so.25.0 00:01:54.892 [243/718] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:54.892 [244/718] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:54.892 [245/718] Linking static target lib/librte_dmadev.a 00:01:54.892 [246/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:55.150 [247/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:55.150 [248/718] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:55.150 [249/718] Linking static target lib/librte_efd.a 00:01:55.150 [250/718] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.150 [251/718] Linking target lib/librte_dmadev.so.25.0 00:01:55.150 [252/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:55.408 [253/718] Linking static target lib/librte_cryptodev.a 00:01:55.408 [254/718] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:01:55.408 [255/718] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.408 [256/718] Linking target lib/librte_efd.so.25.0 00:01:55.408 [257/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:55.408 [258/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:55.666 [259/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:55.666 [260/718] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:55.666 [261/718] Linking static target lib/librte_dispatcher.a 00:01:55.924 [262/718] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:55.924 [263/718] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:55.924 [264/718] 
Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:55.924 [265/718] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:55.924 [266/718] Linking static target lib/librte_gpudev.a 00:01:55.924 [267/718] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.924 [268/718] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:56.183 [269/718] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.183 [270/718] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:56.183 [271/718] Linking target lib/librte_cryptodev.so.25.0 00:01:56.183 [272/718] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:56.183 [273/718] Linking static target lib/librte_gro.a 00:01:56.441 [274/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:56.441 [275/718] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:01:56.441 [276/718] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:56.441 [277/718] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:56.441 [278/718] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:56.441 [279/718] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.441 [280/718] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.700 [281/718] Linking target lib/librte_gpudev.so.25.0 00:01:56.700 [282/718] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:56.700 [283/718] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:56.700 [284/718] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:56.700 [285/718] Linking static target lib/librte_gso.a 00:01:56.700 [286/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:56.700 [287/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:56.700 [288/718] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.700 [289/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:56.958 [290/718] Linking static target lib/librte_eventdev.a 00:01:56.958 [291/718] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.958 [292/718] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:56.958 [293/718] Linking static target lib/librte_jobstats.a 00:01:56.958 [294/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:56.958 [295/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:56.958 [296/718] Linking target lib/librte_ethdev.so.25.0 00:01:56.958 [297/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:56.958 [298/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:56.958 [299/718] Linking static target lib/librte_ip_frag.a 00:01:56.958 [300/718] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:01:56.958 [301/718] Linking target lib/librte_metrics.so.25.0 00:01:57.217 [302/718] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.217 [303/718] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:01:57.217 [304/718] Linking target lib/librte_bpf.so.25.0 
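Once installed into the prefix recorded under "User defined options" above (/home/vagrant/spdk_repo/dpdk/build), consumers would typically resolve compile and link flags for these libraries via DPDK's libdpdk pkg-config file. A sketch, assuming the conventional lib/pkgconfig subdirectory under that prefix:

    # assumption: pkg-config files installed to <prefix>/lib/pkgconfig
    PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig \
        pkg-config --cflags --libs libdpdk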
00:01:57.217 [305/718] Linking target lib/librte_bitratestats.so.25.0 00:01:57.217 [306/718] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:57.217 [307/718] Linking target lib/librte_gro.so.25.0 00:01:57.217 [308/718] Linking target lib/librte_gso.so.25.0 00:01:57.217 [309/718] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.217 [310/718] Linking static target lib/librte_latencystats.a 00:01:57.217 [311/718] Linking target lib/librte_jobstats.so.25.0 00:01:57.217 [312/718] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:01:57.217 [313/718] Linking target lib/librte_ip_frag.so.25.0 00:01:57.217 [314/718] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:57.217 [315/718] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:01:57.217 [316/718] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:57.475 [317/718] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.475 [318/718] Linking target lib/librte_latencystats.so.25.0 00:01:57.475 [319/718] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:57.475 [320/718] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:57.475 [321/718] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:57.475 [322/718] Linking static target lib/librte_lpm.a 00:01:57.475 [323/718] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:01:57.732 [324/718] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:57.732 [325/718] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:57.732 [326/718] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.732 [327/718] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:57.732 [328/718] Linking static target lib/librte_pcapng.a 00:01:57.732 [329/718] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:57.732 [330/718] Linking target lib/librte_lpm.so.25.0 00:01:57.733 [331/718] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:57.989 [332/718] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:01:57.989 [333/718] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:57.989 [334/718] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:57.989 [335/718] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.989 [336/718] Linking target lib/librte_pcapng.so.25.0 00:01:57.989 [337/718] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:57.989 [338/718] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:01:58.247 [339/718] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.247 [340/718] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:58.247 [341/718] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:58.247 [342/718] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.247 [343/718] Linking static target lib/librte_power.a 00:01:58.247 [344/718] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.247 [345/718] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:58.247 
[346/718] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:58.247 [347/718] Linking static target lib/librte_regexdev.a 00:01:58.247 [348/718] Linking target lib/librte_eventdev.so.25.0 00:01:58.247 [349/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:58.504 [350/718] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:58.504 [351/718] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:01:58.504 [352/718] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:58.504 [353/718] Linking static target lib/librte_member.a 00:01:58.504 [354/718] Linking static target lib/librte_rawdev.a 00:01:58.504 [355/718] Linking target lib/librte_dispatcher.so.25.0 00:01:58.504 [356/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:58.504 [357/718] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:58.504 [358/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:58.504 [359/718] Linking static target lib/librte_mldev.a 00:01:58.763 [360/718] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.763 [361/718] Linking target lib/librte_member.so.25.0 00:01:58.763 [362/718] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.763 [363/718] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:58.763 [364/718] Linking target lib/librte_power.so.25.0 00:01:58.763 [365/718] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:58.763 [366/718] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.763 [367/718] Linking target lib/librte_rawdev.so.25.0 00:01:58.763 [368/718] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:59.022 [369/718] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.022 [370/718] Linking target lib/librte_regexdev.so.25.0 00:01:59.022 [371/718] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:59.022 [372/718] Linking static target lib/librte_rib.a 00:01:59.022 [373/718] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:59.022 [374/718] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:59.022 [375/718] Linking static target lib/librte_reorder.a 00:01:59.022 [376/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:59.022 [377/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:59.280 [378/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:59.280 [379/718] Linking static target lib/librte_stack.a 00:01:59.280 [380/718] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:59.280 [381/718] Linking static target lib/librte_security.a 00:01:59.280 [382/718] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:59.280 [383/718] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.280 [384/718] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.280 [385/718] Linking target lib/librte_rib.so.25.0 00:01:59.280 [386/718] Linking target lib/librte_reorder.so.25.0 00:01:59.280 [387/718] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.280 [388/718] Linking target lib/librte_stack.so.25.0 00:01:59.280 [389/718] 
Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:01:59.280 [390/718] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:01:59.538 [391/718] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:59.538 [392/718] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.538 [393/718] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:59.538 [394/718] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.538 [395/718] Linking target lib/librte_security.so.25.0 00:01:59.538 [396/718] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:01:59.538 [397/718] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:59.538 [398/718] Linking static target lib/librte_sched.a 00:01:59.797 [399/718] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.797 [400/718] Linking target lib/librte_mldev.so.25.0 00:01:59.797 [401/718] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:59.797 [402/718] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.055 [403/718] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:00.055 [404/718] Linking target lib/librte_sched.so.25.0 00:02:00.055 [405/718] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:02:00.055 [406/718] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:00.055 [407/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:00.314 [408/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:00.314 [409/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:00.314 [410/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:00.572 [411/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:00.572 [412/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:00.572 [413/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:00.572 [414/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:00.572 [415/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:00.830 [416/718] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:00.830 [417/718] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:00.830 [418/718] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:00.830 [419/718] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:00.830 [420/718] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:00.830 [421/718] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:00.830 [422/718] Linking static target lib/librte_ipsec.a 00:02:01.089 [423/718] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:01.089 [424/718] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.089 [425/718] Linking target lib/librte_ipsec.so.25.0 00:02:01.089 [426/718] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:01.089 [427/718] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:02:01.347 [428/718] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:01.347 [429/718] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:01.347 [430/718] Linking static target lib/librte_fib.a 00:02:01.347 [431/718] Compiling C object 
lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:01.347 [432/718] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:01.606 [433/718] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:01.606 [434/718] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:01.606 [435/718] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:01.606 [436/718] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.606 [437/718] Linking target lib/librte_fib.so.25.0 00:02:01.606 [438/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:01.606 [439/718] Linking static target lib/librte_pdcp.a 00:02:01.864 [440/718] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.864 [441/718] Linking target lib/librte_pdcp.so.25.0 00:02:01.864 [442/718] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:01.864 [443/718] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:02.123 [444/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:02.123 [445/718] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:02.123 [446/718] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:02.123 [447/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:02.381 [448/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:02.381 [449/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:02.381 [450/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:02.381 [451/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:02.381 [452/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:02.639 [453/718] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:02.639 [454/718] Linking static target lib/librte_port.a 00:02:02.639 [455/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:02.639 [456/718] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:02.639 [457/718] Linking static target lib/librte_pdump.a 00:02:02.639 [458/718] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:02.639 [459/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:02.639 [460/718] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:02.897 [461/718] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.897 [462/718] Linking target lib/librte_pdump.so.25.0 00:02:02.897 [463/718] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.897 [464/718] Linking target lib/librte_port.so.25.0 00:02:02.897 [465/718] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:02:03.155 [466/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:03.155 [467/718] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:03.155 [468/718] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:03.155 [469/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:03.155 [470/718] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:03.155 [471/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:03.155 [472/718] Compiling C 
object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:03.155 [473/718] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:03.418 [474/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:03.418 [475/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:03.683 [476/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:03.683 [477/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:03.683 [478/718] Linking static target lib/librte_table.a 00:02:03.683 [479/718] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:03.941 [480/718] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:03.941 [481/718] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:03.941 [482/718] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:03.941 [483/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:04.200 [484/718] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.200 [485/718] Linking target lib/librte_table.so.25.0 00:02:04.200 [486/718] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:04.200 [487/718] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:02:04.200 [488/718] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:04.200 [489/718] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:04.200 [490/718] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:04.200 [491/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:04.767 [492/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:04.767 [493/718] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:04.767 [494/718] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:04.767 [495/718] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:04.767 [496/718] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:04.767 [497/718] Linking static target lib/librte_graph.a 00:02:04.767 [498/718] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:05.025 [499/718] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:05.025 [500/718] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:05.025 [501/718] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.025 [502/718] Linking target lib/librte_graph.so.25.0 00:02:05.283 [503/718] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:02:05.283 [504/718] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:05.283 [505/718] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:05.283 [506/718] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:05.283 [507/718] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:05.283 [508/718] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:05.283 [509/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:05.283 [510/718] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:05.541 [511/718] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:05.541 [512/718] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:05.542 [513/718] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:05.542 [514/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:05.542 [515/718] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:05.542 [516/718] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:05.800 [517/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:05.800 [518/718] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:05.800 [519/718] Linking static target lib/librte_node.a 00:02:05.800 [520/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:05.800 [521/718] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:05.800 [522/718] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:06.059 [523/718] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.059 [524/718] Linking target lib/librte_node.so.25.0 00:02:06.059 [525/718] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:06.059 [526/718] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:06.059 [527/718] Linking static target drivers/librte_bus_vdev.a 00:02:06.059 [528/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:06.059 [529/718] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:06.059 [530/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:06.059 [531/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:06.317 [532/718] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:06.317 [533/718] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.317 [534/718] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:06.317 [535/718] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:06.317 [536/718] Linking static target drivers/librte_bus_pci.a 00:02:06.317 [537/718] Linking target drivers/librte_bus_vdev.so.25.0 00:02:06.317 [538/718] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:06.317 [539/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:06.317 [540/718] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:02:06.317 [541/718] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:06.317 [542/718] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:06.576 [543/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:06.576 [544/718] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:06.576 [545/718] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:06.576 [546/718] Linking static target drivers/librte_mempool_ring.a 00:02:06.576 [547/718] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:06.576 [548/718] Linking target drivers/librte_mempool_ring.so.25.0 00:02:06.576 [549/718] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.576 [550/718] Linking target drivers/librte_bus_pci.so.25.0 00:02:06.834 [551/718] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:06.834 [552/718] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:02:07.092 [553/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:07.092 [554/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:07.092 [555/718] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:07.658 [556/718] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:07.658 [557/718] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:07.658 [558/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:07.920 [559/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:07.920 [560/718] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:07.920 [561/718] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:07.920 [562/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:08.178 [563/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:08.178 [564/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:08.179 [565/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:08.179 [566/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:08.179 [567/718] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:02:08.436 [568/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:08.436 [569/718] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:08.694 [570/718] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:08.694 [571/718] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:08.694 [572/718] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:08.953 [573/718] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:08.953 [574/718] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:08.953 [575/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:08.953 [576/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:08.953 [577/718] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:09.212 [578/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:09.212 [579/718] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:09.212 [580/718] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:09.212 [581/718] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:02:09.470 [582/718] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:09.470 [583/718] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:09.470 [584/718] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:09.470 [585/718] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:09.470 [586/718] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:09.470 [587/718] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:09.470 [588/718] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:09.727 [589/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:09.727 [590/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:09.727 
[591/718] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:09.727 [592/718] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:09.986 [593/718] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:09.986 [594/718] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:09.986 [595/718] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:09.986 [596/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:09.986 [597/718] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:09.986 [598/718] Linking static target drivers/librte_net_i40e.a 00:02:09.986 [599/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:10.243 [600/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:10.244 [601/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:10.244 [602/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:10.502 [603/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:10.502 [604/718] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.502 [605/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:10.502 [606/718] Linking target drivers/librte_net_i40e.so.25.0 00:02:10.502 [607/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:10.760 [608/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:10.760 [609/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:10.760 [610/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:11.018 [611/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:11.018 [612/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:11.018 [613/718] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:11.018 [614/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:11.018 [615/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:11.018 [616/718] Linking static target lib/librte_vhost.a 00:02:11.018 [617/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:11.277 [618/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:11.277 [619/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:11.277 [620/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:11.277 [621/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:11.277 [622/718] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:11.535 [623/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:11.535 [624/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:11.794 [625/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:11.794 [626/718] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:11.794 [627/718] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:11.794 [628/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:11.794 [629/718] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.053 [630/718] Linking target lib/librte_vhost.so.25.0 00:02:12.311 [631/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:12.311 [632/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:12.312 [633/718] Linking static target lib/librte_pipeline.a 00:02:12.312 [634/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:12.312 [635/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:12.312 [636/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:12.312 [637/718] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:12.570 [638/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:12.570 [639/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:12.570 [640/718] Linking target app/dpdk-dumpcap 00:02:12.829 [641/718] Linking target app/dpdk-proc-info 00:02:12.829 [642/718] Linking target app/dpdk-graph 00:02:12.829 [643/718] Linking target app/dpdk-pdump 00:02:12.829 [644/718] Linking target app/dpdk-test-acl 00:02:12.829 [645/718] Linking target app/dpdk-test-cmdline 00:02:12.829 [646/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:12.829 [647/718] Linking target app/dpdk-test-compress-perf 00:02:13.087 [648/718] Linking target app/dpdk-test-crypto-perf 00:02:13.087 [649/718] Linking target app/dpdk-test-dma-perf 00:02:13.087 [650/718] Linking target app/dpdk-test-fib 00:02:13.087 [651/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:13.087 [652/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:13.087 [653/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:13.087 [654/718] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:13.087 [655/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:13.346 [656/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:13.346 [657/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:13.346 [658/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:13.346 [659/718] Linking target app/dpdk-test-gpudev 00:02:13.346 [660/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:13.604 [661/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:13.604 [662/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:13.604 [663/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:13.604 [664/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:13.862 [665/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:13.862 [666/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:13.862 [667/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:13.862 [668/718] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:13.862 [669/718] Linking target app/dpdk-test-flow-perf 00:02:13.862 [670/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:13.862 [671/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:14.124 [672/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:14.124 [673/718] Linking target app/dpdk-test-eventdev 00:02:14.124 [674/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:14.124 [675/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:14.124 [676/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:14.124 [677/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:14.427 [678/718] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:14.427 [679/718] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.427 [680/718] Linking target app/dpdk-test-bbdev 00:02:14.427 [681/718] Linking target lib/librte_pipeline.so.25.0 00:02:14.427 [682/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:14.427 [683/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:14.709 [684/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:14.709 [685/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:14.709 [686/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:14.709 [687/718] Linking target app/dpdk-test-pipeline 00:02:14.709 [688/718] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:14.967 [689/718] Linking target app/dpdk-test-mldev 00:02:14.967 [690/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:14.967 [691/718] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:14.967 [692/718] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:15.225 [693/718] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:15.225 [694/718] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:15.225 [695/718] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:15.225 [696/718] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:15.483 [697/718] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:15.483 [698/718] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:15.483 [699/718] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:15.483 [700/718] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:15.741 [701/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:15.741 [702/718] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:15.741 [703/718] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:15.999 [704/718] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:15.999 [705/718] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:15.999 [706/718] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:15.999 [707/718] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:16.257 [708/718] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:16.257 [709/718] Linking target app/dpdk-test-sad 00:02:16.258 [710/718] Compiling C object 
app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:16.515 [711/718] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:16.515 [712/718] Linking target app/dpdk-test-regex 00:02:16.515 [713/718] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:02:16.515 [714/718] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:16.775 [715/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:16.775 [716/718] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:16.775 [717/718] Linking target app/dpdk-test-security-perf 00:02:17.035 [718/718] Linking target app/dpdk-testpmd 00:02:17.035 10:33:37 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:17.035 10:33:37 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:17.035 10:33:37 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:17.294 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:17.294 [0/1] Installing files. 00:02:17.294 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:02:17.294 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.294 Installing 
/home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.294 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.295 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.558 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.558 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.558 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:17.559 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.559 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:17.560 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:17.561 
Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:17.561 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:17.561 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.561 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing 
lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_rawdev.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.562 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:17.824 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:17.824 
Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:17.824 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:17.824 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:02:17.824 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:17.824 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.824 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 
Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.825 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing 
/home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing 
/home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.826 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:17.827 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:17.827 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:02:17.827 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:17.827 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:02:17.827 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:17.827 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:02:17.827 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:02:17.827 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:02:17.827 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:17.827 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:02:17.827 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:17.827 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:02:17.827 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:17.827 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:02:17.827 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:17.827 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:02:17.827 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:17.827 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:02:17.827 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:17.827 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:02:17.827 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:17.827 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:02:17.827 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:17.827 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 
00:02:17.827 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:17.827 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:02:17.827 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:17.827 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:02:17.827 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:17.827 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:02:17.827 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:17.827 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:02:17.827 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:17.827 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:02:17.827 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:17.827 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:02:17.827 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:17.827 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:02:17.827 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:17.827 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:02:17.827 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:17.827 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:02:17.827 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:17.827 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:02:17.827 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:17.827 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:02:17.827 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:17.827 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:02:17.827 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:17.827 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:02:17.827 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:17.827 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 
00:02:17.827 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:17.827 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:02:17.827 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:17.827 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:02:17.827 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:17.827 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:02:17.827 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:17.827 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:02:17.827 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:17.827 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:02:17.827 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:17.827 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:02:17.828 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:17.828 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:02:17.828 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:17.828 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:02:17.828 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:17.828 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:02:17.828 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:17.828 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:02:17.828 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:17.828 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:02:17.828 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:17.828 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:02:17.828 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:02:17.828 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:02:17.828 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:02:17.828 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:02:17.828 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:02:17.828 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:02:17.828 './librte_mempool_ring.so' -> 
'dpdk/pmds-25.0/librte_mempool_ring.so' 00:02:17.828 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:02:17.828 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:02:17.828 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:02:17.828 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:02:17.828 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:02:17.828 Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:17.828 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:02:17.828 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:17.828 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:02:17.828 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:17.828 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:02:17.828 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:17.828 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:02:17.828 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:17.828 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:02:17.828 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:17.828 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:02:17.828 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:17.828 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:02:17.828 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:17.828 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:02:17.828 Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:17.828 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:02:17.828 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:17.828 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:02:17.828 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:17.828 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:02:17.828 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:17.828 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:02:17.828 Installing symlink pointing to librte_pdcp.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:17.828 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:02:17.828 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:17.828 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:02:17.828 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:17.828 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:02:17.828 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:17.828 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:02:17.828 Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:17.828 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:02:17.828 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:17.828 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:02:17.828 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:17.828 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:02:17.828 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:17.828 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:02:17.828 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:02:17.828 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:02:17.828 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:02:17.828 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:02:17.828 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:02:17.828 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:02:17.828 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:02:17.828 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:02:18.087 10:33:38 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:18.087 10:33:38 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:18.087 00:02:18.087 real 0m37.901s 00:02:18.087 user 4m19.656s 00:02:18.087 sys 0m40.047s 00:02:18.087 10:33:38 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:18.087 ************************************ 00:02:18.087 END TEST 
build_native_dpdk 00:02:18.087 ************************************ 00:02:18.087 10:33:38 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:18.087 10:33:38 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:18.087 10:33:38 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:18.087 10:33:38 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:18.087 10:33:38 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:18.087 10:33:38 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:18.087 10:33:38 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:18.087 10:33:38 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:18.087 10:33:38 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:02:18.087 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:18.087 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:18.087 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:18.087 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:18.654 Using 'verbs' RDMA provider 00:02:29.583 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:41.826 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:41.826 Creating mk/config.mk...done. 00:02:41.826 Creating mk/cc.flags.mk...done. 00:02:41.826 Type 'make' to build. 00:02:41.826 10:34:01 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:41.826 10:34:01 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:41.826 10:34:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:41.826 10:34:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:41.826 ************************************ 00:02:41.827 START TEST make 00:02:41.827 ************************************ 00:02:41.827 10:34:01 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:41.827 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:41.827 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:41.827 meson setup builddir \ 00:02:41.827 -Dwith-libaio=enabled \ 00:02:41.827 -Dwith-liburing=enabled \ 00:02:41.827 -Dwith-libvfn=disabled \ 00:02:41.827 -Dwith-spdk=false && \ 00:02:41.827 meson compile -C builddir && \ 00:02:41.827 cd -) 00:02:41.827 make[1]: Nothing to be done for 'all'. 
00:02:44.369 The Meson build system 00:02:44.369 Version: 1.5.0 00:02:44.369 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:44.369 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:44.369 Build type: native build 00:02:44.369 Project name: xnvme 00:02:44.369 Project version: 0.7.3 00:02:44.369 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:44.369 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:44.369 Host machine cpu family: x86_64 00:02:44.369 Host machine cpu: x86_64 00:02:44.369 Message: host_machine.system: linux 00:02:44.369 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:44.369 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:44.369 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:44.369 Run-time dependency threads found: YES 00:02:44.369 Has header "setupapi.h" : NO 00:02:44.369 Has header "linux/blkzoned.h" : YES 00:02:44.369 Has header "linux/blkzoned.h" : YES (cached) 00:02:44.369 Has header "libaio.h" : YES 00:02:44.369 Library aio found: YES 00:02:44.369 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:44.369 Run-time dependency liburing found: YES 2.2 00:02:44.369 Dependency libvfn skipped: feature with-libvfn disabled 00:02:44.369 Run-time dependency appleframeworks found: NO (tried framework) 00:02:44.369 Run-time dependency appleframeworks found: NO (tried framework) 00:02:44.369 Configuring xnvme_config.h using configuration 00:02:44.369 Configuring xnvme.spec using configuration 00:02:44.369 Run-time dependency bash-completion found: YES 2.11 00:02:44.369 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:44.369 Program cp found: YES (/usr/bin/cp) 00:02:44.369 Has header "winsock2.h" : NO 00:02:44.369 Has header "dbghelp.h" : NO 00:02:44.369 Library rpcrt4 found: NO 00:02:44.369 Library rt found: YES 00:02:44.369 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:44.369 Found CMake: /usr/bin/cmake (3.27.7) 00:02:44.369 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:44.369 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:44.369 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:44.369 Build targets in project: 32 00:02:44.369 00:02:44.369 xnvme 0.7.3 00:02:44.369 00:02:44.369 User defined options 00:02:44.369 with-libaio : enabled 00:02:44.369 with-liburing: enabled 00:02:44.369 with-libvfn : disabled 00:02:44.369 with-spdk : false 00:02:44.369 00:02:44.369 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:44.369 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:44.369 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:44.369 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:44.369 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:44.369 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:44.369 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:44.370 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:44.370 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:44.370 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:44.370 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:44.370 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:44.370 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:44.370 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:44.370 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:44.370 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:44.370 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:44.628 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:44.628 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:44.628 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:44.628 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:44.628 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:44.628 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:44.628 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:44.628 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:44.628 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:44.628 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:44.628 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:44.628 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:44.628 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:44.628 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:44.628 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:44.628 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:44.628 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:44.628 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:44.628 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:44.628 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:44.628 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:44.628 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:44.628 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:44.628 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:44.628 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:44.628 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:44.628 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:44.628 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:44.628 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:44.628 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:44.628 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:44.628 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:44.628 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:44.628 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:44.628 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:44.628 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:44.628 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:44.886 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:44.887 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:44.887 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:44.887 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:44.887 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:44.887 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:44.887 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:44.887 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:44.887 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:44.887 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:44.887 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:44.887 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:44.887 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:44.887 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:44.887 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:44.887 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:44.887 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:44.887 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:44.887 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:44.887 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:44.887 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:44.887 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:45.145 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:45.145 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:45.145 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:45.145 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:45.145 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:45.145 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:45.145 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:45.145 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:45.145 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:45.145 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:45.145 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:45.145 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:45.145 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:45.145 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:45.145 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:45.145 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:45.145 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:45.145 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:45.145 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:45.145 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:45.403 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:45.403 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:45.403 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:45.403 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:45.403 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:45.403 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:45.403 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:45.403 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:45.403 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:45.403 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:45.403 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:45.403 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:45.403 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:45.403 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:45.403 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:45.403 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:45.403 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:45.403 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:45.403 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:45.403 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:45.403 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:45.403 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:45.403 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:45.403 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:45.403 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:45.403 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:45.403 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:45.403 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:45.403 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:45.403 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:45.403 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:45.403 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:45.403 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:45.403 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:45.403 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:45.403 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:45.403 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:45.403 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:45.403 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:45.669 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:45.669 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:45.669 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:45.669 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:45.669 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:45.669 [139/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:45.669 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:45.669 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:45.669 [142/203] Compiling C object 
tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:45.669 [143/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:45.669 [144/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:45.669 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:45.669 [146/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:45.669 [147/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:45.670 [148/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:45.670 [149/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:45.948 [150/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:45.948 [151/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:45.948 [152/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:45.948 [153/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:45.948 [154/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:45.948 [155/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:45.948 [156/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:45.948 [157/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:45.948 [158/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:45.948 [159/203] Linking target lib/libxnvme.so 00:02:45.948 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:45.948 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:45.948 [162/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:45.948 [163/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:45.948 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:45.948 [165/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:45.948 [166/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:45.948 [167/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:45.948 [168/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:45.948 [169/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:45.948 [170/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:46.206 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:46.206 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:46.206 [173/203] Linking static target lib/libxnvme.a 00:02:46.206 [174/203] Linking target tests/xnvme_tests_cli 00:02:46.206 [175/203] Linking target tests/xnvme_tests_async_intf 00:02:46.206 [176/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:46.206 [177/203] Linking target tests/xnvme_tests_lblk 00:02:46.206 [178/203] Linking target tests/xnvme_tests_buf 00:02:46.206 [179/203] Linking target tests/xnvme_tests_enum 00:02:46.206 [180/203] Linking target tests/xnvme_tests_scc 00:02:46.206 [181/203] Linking target tests/xnvme_tests_znd_append 00:02:46.206 [182/203] Linking target tests/xnvme_tests_ioworker 00:02:46.206 [183/203] Linking target tests/xnvme_tests_xnvme_file 00:02:46.206 [184/203] Linking target tests/xnvme_tests_kvs 00:02:46.206 [185/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:46.206 [186/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:46.206 [187/203] Linking target tests/xnvme_tests_znd_state 00:02:46.206 [188/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:46.206 [189/203] Linking target tests/xnvme_tests_map 00:02:46.206 [190/203] Linking target tools/zoned 00:02:46.206 
[191/203] Linking target examples/xnvme_dev 00:02:46.206 [192/203] Linking target tools/lblk 00:02:46.206 [193/203] Linking target examples/xnvme_enum 00:02:46.206 [194/203] Linking target tools/xdd 00:02:46.206 [195/203] Linking target tools/xnvme_file 00:02:46.206 [196/203] Linking target tools/xnvme 00:02:46.206 [197/203] Linking target tools/kvs 00:02:46.206 [198/203] Linking target examples/xnvme_single_async 00:02:46.206 [199/203] Linking target examples/xnvme_hello 00:02:46.206 [200/203] Linking target examples/zoned_io_async 00:02:46.206 [201/203] Linking target examples/xnvme_io_async 00:02:46.206 [202/203] Linking target examples/zoned_io_sync 00:02:46.206 [203/203] Linking target examples/xnvme_single_sync 00:02:46.206 INFO: autodetecting backend as ninja 00:02:46.206 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:46.206 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:18.275 CC lib/ut/ut.o 00:03:18.275 CC lib/log/log_deprecated.o 00:03:18.275 CC lib/log/log.o 00:03:18.275 CC lib/log/log_flags.o 00:03:18.275 CC lib/ut_mock/mock.o 00:03:18.275 LIB libspdk_ut.a 00:03:18.275 LIB libspdk_log.a 00:03:18.275 SO libspdk_ut.so.2.0 00:03:18.275 LIB libspdk_ut_mock.a 00:03:18.275 SO libspdk_log.so.7.0 00:03:18.275 SO libspdk_ut_mock.so.6.0 00:03:18.275 SYMLINK libspdk_ut.so 00:03:18.275 SYMLINK libspdk_log.so 00:03:18.275 SYMLINK libspdk_ut_mock.so 00:03:18.275 CC lib/dma/dma.o 00:03:18.275 CC lib/util/base64.o 00:03:18.275 CC lib/util/crc16.o 00:03:18.275 CC lib/util/cpuset.o 00:03:18.275 CC lib/ioat/ioat.o 00:03:18.275 CC lib/util/crc32.o 00:03:18.275 CC lib/util/crc32c.o 00:03:18.275 CC lib/util/bit_array.o 00:03:18.275 CXX lib/trace_parser/trace.o 00:03:18.275 CC lib/vfio_user/host/vfio_user_pci.o 00:03:18.275 CC lib/util/crc32_ieee.o 00:03:18.275 CC lib/util/crc64.o 00:03:18.275 CC lib/util/dif.o 00:03:18.275 CC lib/util/fd.o 00:03:18.275 LIB libspdk_dma.a 00:03:18.275 CC lib/util/fd_group.o 00:03:18.275 LIB libspdk_ioat.a 00:03:18.275 SO libspdk_dma.so.5.0 00:03:18.275 CC lib/util/file.o 00:03:18.275 CC lib/util/hexlify.o 00:03:18.275 CC lib/vfio_user/host/vfio_user.o 00:03:18.275 SO libspdk_ioat.so.7.0 00:03:18.275 CC lib/util/iov.o 00:03:18.275 SYMLINK libspdk_dma.so 00:03:18.275 CC lib/util/math.o 00:03:18.275 SYMLINK libspdk_ioat.so 00:03:18.275 CC lib/util/net.o 00:03:18.275 CC lib/util/pipe.o 00:03:18.275 CC lib/util/strerror_tls.o 00:03:18.275 CC lib/util/string.o 00:03:18.275 CC lib/util/uuid.o 00:03:18.275 CC lib/util/xor.o 00:03:18.275 CC lib/util/zipf.o 00:03:18.275 LIB libspdk_vfio_user.a 00:03:18.275 CC lib/util/md5.o 00:03:18.275 SO libspdk_vfio_user.so.5.0 00:03:18.275 SYMLINK libspdk_vfio_user.so 00:03:18.275 LIB libspdk_util.a 00:03:18.275 SO libspdk_util.so.10.0 00:03:18.275 LIB libspdk_trace_parser.a 00:03:18.275 SYMLINK libspdk_util.so 00:03:18.275 SO libspdk_trace_parser.so.6.0 00:03:18.275 SYMLINK libspdk_trace_parser.so 00:03:18.275 CC lib/rdma_utils/rdma_utils.o 00:03:18.275 CC lib/json/json_parse.o 00:03:18.275 CC lib/json/json_util.o 00:03:18.275 CC lib/json/json_write.o 00:03:18.275 CC lib/rdma_provider/common.o 00:03:18.275 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:18.275 CC lib/env_dpdk/env.o 00:03:18.275 CC lib/vmd/vmd.o 00:03:18.275 CC lib/conf/conf.o 00:03:18.275 CC lib/idxd/idxd.o 00:03:18.275 CC lib/idxd/idxd_user.o 00:03:18.275 CC lib/vmd/led.o 00:03:18.275 LIB libspdk_rdma_provider.a 00:03:18.275 SO libspdk_rdma_provider.so.6.0 00:03:18.275 LIB libspdk_conf.a 00:03:18.275 CC 
lib/env_dpdk/memory.o 00:03:18.275 SO libspdk_conf.so.6.0 00:03:18.275 LIB libspdk_rdma_utils.a 00:03:18.275 SYMLINK libspdk_rdma_provider.so 00:03:18.275 CC lib/env_dpdk/pci.o 00:03:18.275 SO libspdk_rdma_utils.so.1.0 00:03:18.275 LIB libspdk_json.a 00:03:18.275 CC lib/idxd/idxd_kernel.o 00:03:18.275 SYMLINK libspdk_conf.so 00:03:18.275 CC lib/env_dpdk/init.o 00:03:18.275 SO libspdk_json.so.6.0 00:03:18.275 SYMLINK libspdk_rdma_utils.so 00:03:18.275 CC lib/env_dpdk/threads.o 00:03:18.275 SYMLINK libspdk_json.so 00:03:18.275 CC lib/env_dpdk/pci_ioat.o 00:03:18.275 CC lib/env_dpdk/pci_virtio.o 00:03:18.275 CC lib/env_dpdk/pci_vmd.o 00:03:18.275 CC lib/env_dpdk/pci_idxd.o 00:03:18.275 CC lib/env_dpdk/pci_event.o 00:03:18.275 CC lib/env_dpdk/sigbus_handler.o 00:03:18.275 CC lib/env_dpdk/pci_dpdk.o 00:03:18.275 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:18.275 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:18.275 LIB libspdk_idxd.a 00:03:18.275 LIB libspdk_vmd.a 00:03:18.275 SO libspdk_idxd.so.12.1 00:03:18.275 SO libspdk_vmd.so.6.0 00:03:18.275 SYMLINK libspdk_idxd.so 00:03:18.275 SYMLINK libspdk_vmd.so 00:03:18.275 CC lib/jsonrpc/jsonrpc_server.o 00:03:18.275 CC lib/jsonrpc/jsonrpc_client.o 00:03:18.275 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:18.275 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:18.275 LIB libspdk_jsonrpc.a 00:03:18.275 SO libspdk_jsonrpc.so.6.0 00:03:18.275 SYMLINK libspdk_jsonrpc.so 00:03:18.275 CC lib/rpc/rpc.o 00:03:18.275 LIB libspdk_env_dpdk.a 00:03:18.275 SO libspdk_env_dpdk.so.15.0 00:03:18.275 LIB libspdk_rpc.a 00:03:18.275 SO libspdk_rpc.so.6.0 00:03:18.275 SYMLINK libspdk_env_dpdk.so 00:03:18.275 SYMLINK libspdk_rpc.so 00:03:18.275 CC lib/trace/trace_flags.o 00:03:18.275 CC lib/trace/trace.o 00:03:18.275 CC lib/trace/trace_rpc.o 00:03:18.275 CC lib/keyring/keyring.o 00:03:18.275 CC lib/keyring/keyring_rpc.o 00:03:18.275 CC lib/notify/notify.o 00:03:18.275 CC lib/notify/notify_rpc.o 00:03:18.275 LIB libspdk_notify.a 00:03:18.275 SO libspdk_notify.so.6.0 00:03:18.275 LIB libspdk_keyring.a 00:03:18.275 SYMLINK libspdk_notify.so 00:03:18.275 LIB libspdk_trace.a 00:03:18.275 SO libspdk_keyring.so.2.0 00:03:18.275 SO libspdk_trace.so.11.0 00:03:18.276 SYMLINK libspdk_keyring.so 00:03:18.276 SYMLINK libspdk_trace.so 00:03:18.276 CC lib/thread/iobuf.o 00:03:18.276 CC lib/sock/sock.o 00:03:18.276 CC lib/sock/sock_rpc.o 00:03:18.276 CC lib/thread/thread.o 00:03:18.534 LIB libspdk_sock.a 00:03:18.534 SO libspdk_sock.so.10.0 00:03:18.534 SYMLINK libspdk_sock.so 00:03:18.792 CC lib/nvme/nvme_ctrlr.o 00:03:18.792 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:18.792 CC lib/nvme/nvme_fabric.o 00:03:18.792 CC lib/nvme/nvme_ns_cmd.o 00:03:18.792 CC lib/nvme/nvme_pcie_common.o 00:03:18.792 CC lib/nvme/nvme_ns.o 00:03:18.792 CC lib/nvme/nvme_pcie.o 00:03:18.792 CC lib/nvme/nvme_qpair.o 00:03:18.792 CC lib/nvme/nvme.o 00:03:19.358 CC lib/nvme/nvme_quirks.o 00:03:19.358 CC lib/nvme/nvme_transport.o 00:03:19.358 CC lib/nvme/nvme_discovery.o 00:03:19.616 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:19.616 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:19.616 CC lib/nvme/nvme_tcp.o 00:03:19.616 LIB libspdk_thread.a 00:03:19.616 CC lib/nvme/nvme_opal.o 00:03:19.616 SO libspdk_thread.so.10.2 00:03:19.616 CC lib/nvme/nvme_io_msg.o 00:03:19.616 CC lib/nvme/nvme_poll_group.o 00:03:19.616 SYMLINK libspdk_thread.so 00:03:19.874 CC lib/accel/accel.o 00:03:19.874 CC lib/accel/accel_rpc.o 00:03:20.131 CC lib/blob/blobstore.o 00:03:20.131 CC lib/init/json_config.o 00:03:20.131 CC lib/accel/accel_sw.o 00:03:20.131 CC lib/blob/request.o 
00:03:20.131 CC lib/blob/zeroes.o 00:03:20.131 CC lib/blob/blob_bs_dev.o 00:03:20.131 CC lib/virtio/virtio.o 00:03:20.389 CC lib/init/subsystem.o 00:03:20.389 CC lib/init/subsystem_rpc.o 00:03:20.389 CC lib/init/rpc.o 00:03:20.389 CC lib/virtio/virtio_vhost_user.o 00:03:20.389 CC lib/nvme/nvme_zns.o 00:03:20.389 CC lib/nvme/nvme_stubs.o 00:03:20.389 CC lib/virtio/virtio_vfio_user.o 00:03:20.389 LIB libspdk_init.a 00:03:20.389 SO libspdk_init.so.6.0 00:03:20.389 CC lib/virtio/virtio_pci.o 00:03:20.646 SYMLINK libspdk_init.so 00:03:20.646 CC lib/nvme/nvme_auth.o 00:03:20.647 CC lib/fsdev/fsdev.o 00:03:20.647 CC lib/event/app.o 00:03:20.647 CC lib/nvme/nvme_cuse.o 00:03:20.904 LIB libspdk_virtio.a 00:03:20.904 CC lib/nvme/nvme_rdma.o 00:03:20.904 SO libspdk_virtio.so.7.0 00:03:20.904 LIB libspdk_accel.a 00:03:20.904 SO libspdk_accel.so.16.0 00:03:20.904 SYMLINK libspdk_virtio.so 00:03:20.904 CC lib/event/reactor.o 00:03:20.904 SYMLINK libspdk_accel.so 00:03:20.904 CC lib/fsdev/fsdev_io.o 00:03:21.162 CC lib/bdev/bdev.o 00:03:21.162 CC lib/bdev/bdev_rpc.o 00:03:21.162 CC lib/fsdev/fsdev_rpc.o 00:03:21.162 CC lib/event/log_rpc.o 00:03:21.162 CC lib/event/app_rpc.o 00:03:21.162 LIB libspdk_fsdev.a 00:03:21.162 SO libspdk_fsdev.so.1.0 00:03:21.423 CC lib/event/scheduler_static.o 00:03:21.423 SYMLINK libspdk_fsdev.so 00:03:21.423 CC lib/bdev/bdev_zone.o 00:03:21.423 CC lib/bdev/part.o 00:03:21.423 CC lib/bdev/scsi_nvme.o 00:03:21.423 LIB libspdk_event.a 00:03:21.423 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:21.423 SO libspdk_event.so.15.0 00:03:21.423 SYMLINK libspdk_event.so 00:03:21.994 LIB libspdk_fuse_dispatcher.a 00:03:21.994 SO libspdk_fuse_dispatcher.so.1.0 00:03:21.994 LIB libspdk_nvme.a 00:03:22.252 SYMLINK libspdk_fuse_dispatcher.so 00:03:22.252 SO libspdk_nvme.so.14.0 00:03:22.510 SYMLINK libspdk_nvme.so 00:03:22.769 LIB libspdk_blob.a 00:03:22.769 SO libspdk_blob.so.11.0 00:03:22.769 SYMLINK libspdk_blob.so 00:03:23.026 CC lib/lvol/lvol.o 00:03:23.026 CC lib/blobfs/tree.o 00:03:23.026 CC lib/blobfs/blobfs.o 00:03:23.592 LIB libspdk_blobfs.a 00:03:23.849 SO libspdk_blobfs.so.10.0 00:03:23.849 LIB libspdk_bdev.a 00:03:23.849 LIB libspdk_lvol.a 00:03:23.849 SYMLINK libspdk_blobfs.so 00:03:23.849 SO libspdk_lvol.so.10.0 00:03:23.849 SO libspdk_bdev.so.17.0 00:03:23.849 SYMLINK libspdk_lvol.so 00:03:23.849 SYMLINK libspdk_bdev.so 00:03:24.107 CC lib/nbd/nbd.o 00:03:24.107 CC lib/nbd/nbd_rpc.o 00:03:24.107 CC lib/scsi/dev.o 00:03:24.107 CC lib/scsi/lun.o 00:03:24.107 CC lib/scsi/port.o 00:03:24.107 CC lib/scsi/scsi.o 00:03:24.107 CC lib/scsi/scsi_bdev.o 00:03:24.107 CC lib/ftl/ftl_core.o 00:03:24.107 CC lib/ublk/ublk.o 00:03:24.107 CC lib/nvmf/ctrlr.o 00:03:24.107 CC lib/ublk/ublk_rpc.o 00:03:24.107 CC lib/scsi/scsi_pr.o 00:03:24.107 CC lib/scsi/scsi_rpc.o 00:03:24.107 CC lib/scsi/task.o 00:03:24.107 CC lib/ftl/ftl_init.o 00:03:24.365 CC lib/ftl/ftl_layout.o 00:03:24.365 CC lib/ftl/ftl_debug.o 00:03:24.365 CC lib/ftl/ftl_io.o 00:03:24.365 LIB libspdk_nbd.a 00:03:24.365 CC lib/ftl/ftl_sb.o 00:03:24.365 SO libspdk_nbd.so.7.0 00:03:24.365 SYMLINK libspdk_nbd.so 00:03:24.365 CC lib/nvmf/ctrlr_discovery.o 00:03:24.365 CC lib/ftl/ftl_l2p.o 00:03:24.365 CC lib/ftl/ftl_l2p_flat.o 00:03:24.365 CC lib/nvmf/ctrlr_bdev.o 00:03:24.365 CC lib/ftl/ftl_nv_cache.o 00:03:24.365 LIB libspdk_scsi.a 00:03:24.365 CC lib/ftl/ftl_band.o 00:03:24.622 CC lib/ftl/ftl_band_ops.o 00:03:24.622 SO libspdk_scsi.so.9.0 00:03:24.622 CC lib/ftl/ftl_writer.o 00:03:24.622 SYMLINK libspdk_scsi.so 00:03:24.622 CC 
lib/ftl/ftl_rq.o 00:03:24.622 CC lib/ftl/ftl_reloc.o 00:03:24.622 LIB libspdk_ublk.a 00:03:24.622 SO libspdk_ublk.so.3.0 00:03:24.622 SYMLINK libspdk_ublk.so 00:03:24.622 CC lib/ftl/ftl_l2p_cache.o 00:03:24.880 CC lib/ftl/ftl_p2l.o 00:03:24.880 CC lib/ftl/ftl_p2l_log.o 00:03:24.880 CC lib/ftl/mngt/ftl_mngt.o 00:03:24.880 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:24.880 CC lib/iscsi/conn.o 00:03:24.880 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:24.880 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:25.139 CC lib/nvmf/subsystem.o 00:03:25.139 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:25.139 CC lib/iscsi/init_grp.o 00:03:25.139 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:25.139 CC lib/iscsi/iscsi.o 00:03:25.139 CC lib/iscsi/param.o 00:03:25.139 CC lib/vhost/vhost.o 00:03:25.397 CC lib/vhost/vhost_rpc.o 00:03:25.397 CC lib/vhost/vhost_scsi.o 00:03:25.397 CC lib/iscsi/portal_grp.o 00:03:25.397 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:25.397 CC lib/iscsi/tgt_node.o 00:03:25.397 CC lib/nvmf/nvmf.o 00:03:25.655 CC lib/nvmf/nvmf_rpc.o 00:03:25.655 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:25.914 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:25.914 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:25.914 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:25.914 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:25.914 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:25.914 CC lib/vhost/vhost_blk.o 00:03:25.914 CC lib/vhost/rte_vhost_user.o 00:03:25.914 CC lib/iscsi/iscsi_subsystem.o 00:03:25.914 CC lib/iscsi/iscsi_rpc.o 00:03:25.914 CC lib/ftl/utils/ftl_conf.o 00:03:26.172 CC lib/iscsi/task.o 00:03:26.172 CC lib/ftl/utils/ftl_md.o 00:03:26.172 CC lib/nvmf/transport.o 00:03:26.172 CC lib/ftl/utils/ftl_mempool.o 00:03:26.172 CC lib/nvmf/tcp.o 00:03:26.429 CC lib/nvmf/stubs.o 00:03:26.429 CC lib/nvmf/mdns_server.o 00:03:26.429 CC lib/nvmf/rdma.o 00:03:26.429 CC lib/nvmf/auth.o 00:03:26.429 CC lib/ftl/utils/ftl_bitmap.o 00:03:26.429 LIB libspdk_iscsi.a 00:03:26.688 CC lib/ftl/utils/ftl_property.o 00:03:26.688 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:26.688 SO libspdk_iscsi.so.8.0 00:03:26.688 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:26.688 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:26.688 LIB libspdk_vhost.a 00:03:26.688 SYMLINK libspdk_iscsi.so 00:03:26.688 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:26.688 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:26.688 SO libspdk_vhost.so.8.0 00:03:26.688 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:26.688 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:26.946 SYMLINK libspdk_vhost.so 00:03:26.946 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:26.946 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:26.946 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:26.946 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:26.946 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:26.946 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:26.946 CC lib/ftl/base/ftl_base_dev.o 00:03:26.946 CC lib/ftl/base/ftl_base_bdev.o 00:03:26.946 CC lib/ftl/ftl_trace.o 00:03:27.204 LIB libspdk_ftl.a 00:03:27.461 SO libspdk_ftl.so.9.0 00:03:27.461 SYMLINK libspdk_ftl.so 00:03:28.396 LIB libspdk_nvmf.a 00:03:28.396 SO libspdk_nvmf.so.19.0 00:03:28.654 SYMLINK libspdk_nvmf.so 00:03:28.912 CC module/env_dpdk/env_dpdk_rpc.o 00:03:28.912 CC module/blob/bdev/blob_bdev.o 00:03:28.912 CC module/accel/ioat/accel_ioat.o 00:03:28.912 CC module/keyring/linux/keyring.o 00:03:28.912 CC module/accel/dsa/accel_dsa.o 00:03:28.912 CC module/keyring/file/keyring.o 00:03:28.912 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:28.912 CC module/fsdev/aio/fsdev_aio.o 00:03:28.912 CC module/sock/posix/posix.o 00:03:28.912 CC 
module/accel/error/accel_error.o 00:03:28.912 LIB libspdk_env_dpdk_rpc.a 00:03:28.912 SO libspdk_env_dpdk_rpc.so.6.0 00:03:28.912 CC module/keyring/linux/keyring_rpc.o 00:03:28.912 SYMLINK libspdk_env_dpdk_rpc.so 00:03:28.912 CC module/keyring/file/keyring_rpc.o 00:03:28.912 LIB libspdk_scheduler_dynamic.a 00:03:28.912 CC module/accel/error/accel_error_rpc.o 00:03:28.912 SO libspdk_scheduler_dynamic.so.4.0 00:03:29.169 LIB libspdk_keyring_linux.a 00:03:29.169 CC module/accel/ioat/accel_ioat_rpc.o 00:03:29.169 LIB libspdk_blob_bdev.a 00:03:29.169 SO libspdk_keyring_linux.so.1.0 00:03:29.169 SYMLINK libspdk_scheduler_dynamic.so 00:03:29.169 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:29.169 SO libspdk_blob_bdev.so.11.0 00:03:29.169 LIB libspdk_keyring_file.a 00:03:29.169 LIB libspdk_accel_error.a 00:03:29.169 SO libspdk_keyring_file.so.2.0 00:03:29.169 SYMLINK libspdk_keyring_linux.so 00:03:29.169 SYMLINK libspdk_blob_bdev.so 00:03:29.169 CC module/accel/dsa/accel_dsa_rpc.o 00:03:29.169 SO libspdk_accel_error.so.2.0 00:03:29.169 CC module/fsdev/aio/linux_aio_mgr.o 00:03:29.169 SYMLINK libspdk_keyring_file.so 00:03:29.169 LIB libspdk_accel_ioat.a 00:03:29.169 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:29.169 SYMLINK libspdk_accel_error.so 00:03:29.169 SO libspdk_accel_ioat.so.6.0 00:03:29.170 CC module/scheduler/gscheduler/gscheduler.o 00:03:29.170 SYMLINK libspdk_accel_ioat.so 00:03:29.170 LIB libspdk_accel_dsa.a 00:03:29.170 LIB libspdk_scheduler_dpdk_governor.a 00:03:29.427 SO libspdk_accel_dsa.so.5.0 00:03:29.427 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:29.427 CC module/accel/iaa/accel_iaa.o 00:03:29.427 SYMLINK libspdk_accel_dsa.so 00:03:29.427 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:29.427 CC module/bdev/delay/vbdev_delay.o 00:03:29.427 LIB libspdk_fsdev_aio.a 00:03:29.427 CC module/blobfs/bdev/blobfs_bdev.o 00:03:29.427 LIB libspdk_scheduler_gscheduler.a 00:03:29.427 SO libspdk_fsdev_aio.so.1.0 00:03:29.427 CC module/bdev/error/vbdev_error.o 00:03:29.427 SO libspdk_scheduler_gscheduler.so.4.0 00:03:29.427 CC module/accel/iaa/accel_iaa_rpc.o 00:03:29.427 CC module/bdev/gpt/gpt.o 00:03:29.427 CC module/bdev/lvol/vbdev_lvol.o 00:03:29.427 SYMLINK libspdk_fsdev_aio.so 00:03:29.427 SYMLINK libspdk_scheduler_gscheduler.so 00:03:29.427 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:29.427 CC module/bdev/malloc/bdev_malloc.o 00:03:29.427 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:29.685 LIB libspdk_sock_posix.a 00:03:29.686 LIB libspdk_accel_iaa.a 00:03:29.686 SO libspdk_accel_iaa.so.3.0 00:03:29.686 CC module/bdev/gpt/vbdev_gpt.o 00:03:29.686 SO libspdk_sock_posix.so.6.0 00:03:29.686 CC module/bdev/null/bdev_null.o 00:03:29.686 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:29.686 SYMLINK libspdk_accel_iaa.so 00:03:29.686 SYMLINK libspdk_sock_posix.so 00:03:29.686 LIB libspdk_blobfs_bdev.a 00:03:29.686 CC module/bdev/error/vbdev_error_rpc.o 00:03:29.686 SO libspdk_blobfs_bdev.so.6.0 00:03:29.686 SYMLINK libspdk_blobfs_bdev.so 00:03:29.686 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:29.686 CC module/bdev/nvme/bdev_nvme.o 00:03:29.686 CC module/bdev/passthru/vbdev_passthru.o 00:03:29.686 LIB libspdk_bdev_delay.a 00:03:29.944 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:29.944 LIB libspdk_bdev_gpt.a 00:03:29.944 SO libspdk_bdev_delay.so.6.0 00:03:29.944 LIB libspdk_bdev_error.a 00:03:29.944 SO libspdk_bdev_gpt.so.6.0 00:03:29.944 CC module/bdev/null/bdev_null_rpc.o 00:03:29.944 SO libspdk_bdev_error.so.6.0 00:03:29.944 LIB libspdk_bdev_malloc.a 00:03:29.944 SYMLINK 
libspdk_bdev_delay.so 00:03:29.944 SYMLINK libspdk_bdev_gpt.so 00:03:29.944 SO libspdk_bdev_malloc.so.6.0 00:03:29.944 SYMLINK libspdk_bdev_error.so 00:03:29.944 SYMLINK libspdk_bdev_malloc.so 00:03:29.944 LIB libspdk_bdev_null.a 00:03:29.944 CC module/bdev/raid/bdev_raid.o 00:03:29.944 LIB libspdk_bdev_lvol.a 00:03:29.944 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:29.944 SO libspdk_bdev_null.so.6.0 00:03:29.944 SO libspdk_bdev_lvol.so.6.0 00:03:29.944 CC module/bdev/xnvme/bdev_xnvme.o 00:03:29.944 CC module/bdev/split/vbdev_split.o 00:03:30.202 CC module/bdev/aio/bdev_aio.o 00:03:30.202 SYMLINK libspdk_bdev_null.so 00:03:30.202 SYMLINK libspdk_bdev_lvol.so 00:03:30.202 CC module/bdev/aio/bdev_aio_rpc.o 00:03:30.202 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:30.202 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:30.202 LIB libspdk_bdev_passthru.a 00:03:30.202 CC module/bdev/split/vbdev_split_rpc.o 00:03:30.202 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:30.202 SO libspdk_bdev_passthru.so.6.0 00:03:30.202 CC module/bdev/raid/bdev_raid_rpc.o 00:03:30.202 SYMLINK libspdk_bdev_passthru.so 00:03:30.485 LIB libspdk_bdev_zone_block.a 00:03:30.485 LIB libspdk_bdev_xnvme.a 00:03:30.485 CC module/bdev/ftl/bdev_ftl.o 00:03:30.485 SO libspdk_bdev_zone_block.so.6.0 00:03:30.485 SO libspdk_bdev_xnvme.so.3.0 00:03:30.485 LIB libspdk_bdev_split.a 00:03:30.485 SO libspdk_bdev_split.so.6.0 00:03:30.485 LIB libspdk_bdev_aio.a 00:03:30.485 SYMLINK libspdk_bdev_xnvme.so 00:03:30.485 SYMLINK libspdk_bdev_zone_block.so 00:03:30.485 CC module/bdev/raid/bdev_raid_sb.o 00:03:30.485 CC module/bdev/iscsi/bdev_iscsi.o 00:03:30.485 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:30.485 SYMLINK libspdk_bdev_split.so 00:03:30.485 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:30.485 CC module/bdev/nvme/nvme_rpc.o 00:03:30.485 SO libspdk_bdev_aio.so.6.0 00:03:30.485 SYMLINK libspdk_bdev_aio.so 00:03:30.485 CC module/bdev/nvme/bdev_mdns_client.o 00:03:30.485 CC module/bdev/nvme/vbdev_opal.o 00:03:30.485 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:30.753 CC module/bdev/raid/raid0.o 00:03:30.753 LIB libspdk_bdev_ftl.a 00:03:30.753 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:30.753 SO libspdk_bdev_ftl.so.6.0 00:03:30.753 CC module/bdev/raid/raid1.o 00:03:30.753 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:30.753 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:30.753 SYMLINK libspdk_bdev_ftl.so 00:03:30.753 CC module/bdev/raid/concat.o 00:03:30.753 LIB libspdk_bdev_iscsi.a 00:03:30.753 SO libspdk_bdev_iscsi.so.6.0 00:03:30.753 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:31.012 SYMLINK libspdk_bdev_iscsi.so 00:03:31.012 LIB libspdk_bdev_virtio.a 00:03:31.012 SO libspdk_bdev_virtio.so.6.0 00:03:31.012 LIB libspdk_bdev_raid.a 00:03:31.013 SYMLINK libspdk_bdev_virtio.so 00:03:31.013 SO libspdk_bdev_raid.so.6.0 00:03:31.272 SYMLINK libspdk_bdev_raid.so 00:03:31.842 LIB libspdk_bdev_nvme.a 00:03:31.842 SO libspdk_bdev_nvme.so.7.0 00:03:32.103 SYMLINK libspdk_bdev_nvme.so 00:03:32.361 CC module/event/subsystems/scheduler/scheduler.o 00:03:32.361 CC module/event/subsystems/keyring/keyring.o 00:03:32.361 CC module/event/subsystems/vmd/vmd.o 00:03:32.361 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:32.361 CC module/event/subsystems/iobuf/iobuf.o 00:03:32.362 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:32.362 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:32.362 CC module/event/subsystems/fsdev/fsdev.o 00:03:32.362 CC module/event/subsystems/sock/sock.o 00:03:32.362 LIB libspdk_event_scheduler.a 
00:03:32.362 SO libspdk_event_scheduler.so.4.0 00:03:32.621 LIB libspdk_event_vhost_blk.a 00:03:32.621 LIB libspdk_event_vmd.a 00:03:32.621 LIB libspdk_event_keyring.a 00:03:32.621 LIB libspdk_event_fsdev.a 00:03:32.621 SO libspdk_event_vhost_blk.so.3.0 00:03:32.621 LIB libspdk_event_iobuf.a 00:03:32.621 SO libspdk_event_vmd.so.6.0 00:03:32.621 LIB libspdk_event_sock.a 00:03:32.621 SO libspdk_event_keyring.so.1.0 00:03:32.621 SYMLINK libspdk_event_scheduler.so 00:03:32.621 SO libspdk_event_fsdev.so.1.0 00:03:32.621 SO libspdk_event_sock.so.5.0 00:03:32.621 SO libspdk_event_iobuf.so.3.0 00:03:32.621 SYMLINK libspdk_event_vhost_blk.so 00:03:32.621 SYMLINK libspdk_event_keyring.so 00:03:32.621 SYMLINK libspdk_event_vmd.so 00:03:32.621 SYMLINK libspdk_event_fsdev.so 00:03:32.621 SYMLINK libspdk_event_sock.so 00:03:32.621 SYMLINK libspdk_event_iobuf.so 00:03:32.880 CC module/event/subsystems/accel/accel.o 00:03:32.880 LIB libspdk_event_accel.a 00:03:32.880 SO libspdk_event_accel.so.6.0 00:03:33.138 SYMLINK libspdk_event_accel.so 00:03:33.397 CC module/event/subsystems/bdev/bdev.o 00:03:33.397 LIB libspdk_event_bdev.a 00:03:33.397 SO libspdk_event_bdev.so.6.0 00:03:33.397 SYMLINK libspdk_event_bdev.so 00:03:33.655 CC module/event/subsystems/nbd/nbd.o 00:03:33.655 CC module/event/subsystems/scsi/scsi.o 00:03:33.655 CC module/event/subsystems/ublk/ublk.o 00:03:33.655 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:33.655 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:33.655 LIB libspdk_event_ublk.a 00:03:33.655 LIB libspdk_event_nbd.a 00:03:33.655 LIB libspdk_event_scsi.a 00:03:33.655 SO libspdk_event_ublk.so.3.0 00:03:33.914 SO libspdk_event_nbd.so.6.0 00:03:33.914 SO libspdk_event_scsi.so.6.0 00:03:33.914 SYMLINK libspdk_event_ublk.so 00:03:33.914 SYMLINK libspdk_event_nbd.so 00:03:33.914 SYMLINK libspdk_event_scsi.so 00:03:33.914 LIB libspdk_event_nvmf.a 00:03:33.914 SO libspdk_event_nvmf.so.6.0 00:03:33.914 SYMLINK libspdk_event_nvmf.so 00:03:33.914 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:33.914 CC module/event/subsystems/iscsi/iscsi.o 00:03:34.174 LIB libspdk_event_vhost_scsi.a 00:03:34.174 LIB libspdk_event_iscsi.a 00:03:34.174 SO libspdk_event_vhost_scsi.so.3.0 00:03:34.174 SO libspdk_event_iscsi.so.6.0 00:03:34.174 SYMLINK libspdk_event_vhost_scsi.so 00:03:34.174 SYMLINK libspdk_event_iscsi.so 00:03:34.433 SO libspdk.so.6.0 00:03:34.434 SYMLINK libspdk.so 00:03:34.691 CC app/trace_record/trace_record.o 00:03:34.691 CXX app/trace/trace.o 00:03:34.691 CC app/spdk_lspci/spdk_lspci.o 00:03:34.691 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:34.691 CC app/nvmf_tgt/nvmf_main.o 00:03:34.691 CC app/iscsi_tgt/iscsi_tgt.o 00:03:34.691 CC examples/util/zipf/zipf.o 00:03:34.691 CC examples/ioat/perf/perf.o 00:03:34.691 CC app/spdk_tgt/spdk_tgt.o 00:03:34.691 CC test/thread/poller_perf/poller_perf.o 00:03:34.691 LINK spdk_lspci 00:03:34.691 LINK nvmf_tgt 00:03:34.691 LINK interrupt_tgt 00:03:34.691 LINK zipf 00:03:34.691 LINK iscsi_tgt 00:03:34.691 LINK poller_perf 00:03:34.691 LINK spdk_trace_record 00:03:34.691 LINK ioat_perf 00:03:34.949 LINK spdk_tgt 00:03:34.949 CC app/spdk_nvme_perf/perf.o 00:03:34.949 CC app/spdk_nvme_identify/identify.o 00:03:34.949 LINK spdk_trace 00:03:34.949 CC examples/ioat/verify/verify.o 00:03:34.949 CC app/spdk_nvme_discover/discovery_aer.o 00:03:34.949 CC app/spdk_top/spdk_top.o 00:03:34.949 CC examples/thread/thread/thread_ex.o 00:03:35.207 CC examples/sock/hello_world/hello_sock.o 00:03:35.207 LINK verify 00:03:35.207 CC 
test/dma/test_dma/test_dma.o 00:03:35.207 CC app/spdk_dd/spdk_dd.o 00:03:35.207 LINK spdk_nvme_discover 00:03:35.207 CC app/fio/nvme/fio_plugin.o 00:03:35.207 LINK thread 00:03:35.465 LINK hello_sock 00:03:35.465 CC test/app/bdev_svc/bdev_svc.o 00:03:35.465 LINK spdk_dd 00:03:35.465 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:35.465 CC app/vhost/vhost.o 00:03:35.465 LINK bdev_svc 00:03:35.465 LINK test_dma 00:03:35.465 CC examples/vmd/lsvmd/lsvmd.o 00:03:35.723 CC examples/vmd/led/led.o 00:03:35.723 LINK vhost 00:03:35.723 LINK spdk_nvme_perf 00:03:35.723 LINK spdk_nvme_identify 00:03:35.723 LINK lsvmd 00:03:35.723 LINK led 00:03:35.723 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:35.723 LINK spdk_nvme 00:03:35.723 LINK spdk_top 00:03:35.723 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:35.723 LINK nvme_fuzz 00:03:35.723 TEST_HEADER include/spdk/accel.h 00:03:35.723 TEST_HEADER include/spdk/accel_module.h 00:03:35.723 TEST_HEADER include/spdk/assert.h 00:03:35.723 TEST_HEADER include/spdk/barrier.h 00:03:35.723 TEST_HEADER include/spdk/base64.h 00:03:35.723 TEST_HEADER include/spdk/bdev.h 00:03:35.723 TEST_HEADER include/spdk/bdev_module.h 00:03:35.723 TEST_HEADER include/spdk/bdev_zone.h 00:03:35.723 TEST_HEADER include/spdk/bit_array.h 00:03:35.982 TEST_HEADER include/spdk/bit_pool.h 00:03:35.982 TEST_HEADER include/spdk/blob_bdev.h 00:03:35.982 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:35.982 TEST_HEADER include/spdk/blobfs.h 00:03:35.982 TEST_HEADER include/spdk/blob.h 00:03:35.982 TEST_HEADER include/spdk/conf.h 00:03:35.982 TEST_HEADER include/spdk/config.h 00:03:35.982 TEST_HEADER include/spdk/cpuset.h 00:03:35.982 TEST_HEADER include/spdk/crc16.h 00:03:35.982 TEST_HEADER include/spdk/crc32.h 00:03:35.982 TEST_HEADER include/spdk/crc64.h 00:03:35.982 TEST_HEADER include/spdk/dif.h 00:03:35.982 TEST_HEADER include/spdk/dma.h 00:03:35.982 TEST_HEADER include/spdk/endian.h 00:03:35.982 TEST_HEADER include/spdk/env_dpdk.h 00:03:35.982 TEST_HEADER include/spdk/env.h 00:03:35.982 TEST_HEADER include/spdk/event.h 00:03:35.982 TEST_HEADER include/spdk/fd_group.h 00:03:35.982 CC test/app/jsoncat/jsoncat.o 00:03:35.982 TEST_HEADER include/spdk/fd.h 00:03:35.982 CC test/app/histogram_perf/histogram_perf.o 00:03:35.982 CC app/fio/bdev/fio_plugin.o 00:03:35.982 TEST_HEADER include/spdk/file.h 00:03:35.982 TEST_HEADER include/spdk/fsdev.h 00:03:35.982 TEST_HEADER include/spdk/fsdev_module.h 00:03:35.982 TEST_HEADER include/spdk/ftl.h 00:03:35.982 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:35.982 TEST_HEADER include/spdk/gpt_spec.h 00:03:35.982 TEST_HEADER include/spdk/hexlify.h 00:03:35.982 TEST_HEADER include/spdk/histogram_data.h 00:03:35.982 TEST_HEADER include/spdk/idxd.h 00:03:35.982 TEST_HEADER include/spdk/idxd_spec.h 00:03:35.982 TEST_HEADER include/spdk/init.h 00:03:35.982 TEST_HEADER include/spdk/ioat.h 00:03:35.982 TEST_HEADER include/spdk/ioat_spec.h 00:03:35.982 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:35.982 TEST_HEADER include/spdk/iscsi_spec.h 00:03:35.982 TEST_HEADER include/spdk/json.h 00:03:35.982 TEST_HEADER include/spdk/jsonrpc.h 00:03:35.982 TEST_HEADER include/spdk/keyring.h 00:03:35.982 TEST_HEADER include/spdk/keyring_module.h 00:03:35.982 TEST_HEADER include/spdk/likely.h 00:03:35.982 TEST_HEADER include/spdk/log.h 00:03:35.982 CC test/app/stub/stub.o 00:03:35.982 TEST_HEADER include/spdk/lvol.h 00:03:35.982 TEST_HEADER include/spdk/md5.h 00:03:35.982 TEST_HEADER include/spdk/memory.h 00:03:35.982 TEST_HEADER include/spdk/mmio.h 00:03:35.982 
TEST_HEADER include/spdk/nbd.h 00:03:35.982 TEST_HEADER include/spdk/net.h 00:03:35.982 TEST_HEADER include/spdk/notify.h 00:03:35.982 TEST_HEADER include/spdk/nvme.h 00:03:35.982 TEST_HEADER include/spdk/nvme_intel.h 00:03:35.982 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:35.982 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:35.982 TEST_HEADER include/spdk/nvme_spec.h 00:03:35.982 TEST_HEADER include/spdk/nvme_zns.h 00:03:35.982 CC examples/idxd/perf/perf.o 00:03:35.982 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:35.982 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:35.982 TEST_HEADER include/spdk/nvmf.h 00:03:35.982 TEST_HEADER include/spdk/nvmf_spec.h 00:03:35.982 TEST_HEADER include/spdk/nvmf_transport.h 00:03:35.982 TEST_HEADER include/spdk/opal.h 00:03:35.982 TEST_HEADER include/spdk/opal_spec.h 00:03:35.982 TEST_HEADER include/spdk/pci_ids.h 00:03:35.982 TEST_HEADER include/spdk/pipe.h 00:03:35.982 TEST_HEADER include/spdk/queue.h 00:03:35.982 TEST_HEADER include/spdk/reduce.h 00:03:35.982 TEST_HEADER include/spdk/rpc.h 00:03:35.982 TEST_HEADER include/spdk/scheduler.h 00:03:35.982 TEST_HEADER include/spdk/scsi.h 00:03:35.982 TEST_HEADER include/spdk/scsi_spec.h 00:03:35.982 TEST_HEADER include/spdk/sock.h 00:03:35.982 TEST_HEADER include/spdk/stdinc.h 00:03:35.982 TEST_HEADER include/spdk/string.h 00:03:35.982 TEST_HEADER include/spdk/thread.h 00:03:35.982 TEST_HEADER include/spdk/trace.h 00:03:35.982 TEST_HEADER include/spdk/trace_parser.h 00:03:35.982 TEST_HEADER include/spdk/tree.h 00:03:35.982 TEST_HEADER include/spdk/ublk.h 00:03:35.982 TEST_HEADER include/spdk/util.h 00:03:35.982 TEST_HEADER include/spdk/uuid.h 00:03:35.982 TEST_HEADER include/spdk/version.h 00:03:35.982 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:35.982 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:35.982 TEST_HEADER include/spdk/vhost.h 00:03:35.982 TEST_HEADER include/spdk/vmd.h 00:03:35.982 TEST_HEADER include/spdk/xor.h 00:03:35.982 TEST_HEADER include/spdk/zipf.h 00:03:35.982 CXX test/cpp_headers/accel.o 00:03:35.982 LINK histogram_perf 00:03:35.982 LINK jsoncat 00:03:35.982 CC test/event/event_perf/event_perf.o 00:03:35.982 LINK stub 00:03:36.240 CC test/env/mem_callbacks/mem_callbacks.o 00:03:36.240 CXX test/cpp_headers/accel_module.o 00:03:36.240 LINK event_perf 00:03:36.240 CC test/env/vtophys/vtophys.o 00:03:36.240 LINK vhost_fuzz 00:03:36.240 CXX test/cpp_headers/assert.o 00:03:36.240 LINK idxd_perf 00:03:36.240 CC test/event/reactor/reactor.o 00:03:36.240 CXX test/cpp_headers/barrier.o 00:03:36.240 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:36.240 LINK vtophys 00:03:36.240 CXX test/cpp_headers/base64.o 00:03:36.497 LINK spdk_bdev 00:03:36.497 LINK reactor 00:03:36.497 CXX test/cpp_headers/bdev.o 00:03:36.497 CC test/event/reactor_perf/reactor_perf.o 00:03:36.497 CXX test/cpp_headers/bdev_module.o 00:03:36.497 CC test/event/app_repeat/app_repeat.o 00:03:36.497 CXX test/cpp_headers/bdev_zone.o 00:03:36.497 CXX test/cpp_headers/bit_array.o 00:03:36.497 CC test/event/scheduler/scheduler.o 00:03:36.497 LINK hello_fsdev 00:03:36.497 LINK reactor_perf 00:03:36.755 LINK mem_callbacks 00:03:36.755 LINK app_repeat 00:03:36.755 CXX test/cpp_headers/bit_pool.o 00:03:36.755 CXX test/cpp_headers/blob_bdev.o 00:03:36.755 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:36.755 CC test/rpc_client/rpc_client_test.o 00:03:36.755 LINK scheduler 00:03:36.755 CC test/nvme/aer/aer.o 00:03:36.755 CC test/nvme/reset/reset.o 00:03:36.755 CXX test/cpp_headers/blobfs_bdev.o 00:03:36.755 LINK 
env_dpdk_post_init 00:03:37.014 CC examples/accel/perf/accel_perf.o 00:03:37.014 LINK rpc_client_test 00:03:37.014 CC examples/blob/hello_world/hello_blob.o 00:03:37.014 CC examples/nvme/hello_world/hello_world.o 00:03:37.014 LINK aer 00:03:37.014 CC test/env/memory/memory_ut.o 00:03:37.014 CXX test/cpp_headers/blobfs.o 00:03:37.014 CC test/accel/dif/dif.o 00:03:37.014 LINK reset 00:03:37.014 LINK hello_blob 00:03:37.274 LINK hello_world 00:03:37.274 LINK iscsi_fuzz 00:03:37.274 CC test/blobfs/mkfs/mkfs.o 00:03:37.274 CC test/nvme/sgl/sgl.o 00:03:37.274 CXX test/cpp_headers/blob.o 00:03:37.274 CC examples/nvme/reconnect/reconnect.o 00:03:37.274 CC test/nvme/e2edp/nvme_dp.o 00:03:37.274 LINK accel_perf 00:03:37.274 LINK mkfs 00:03:37.274 CC examples/blob/cli/blobcli.o 00:03:37.534 CXX test/cpp_headers/conf.o 00:03:37.534 CC test/nvme/overhead/overhead.o 00:03:37.534 LINK sgl 00:03:37.534 CXX test/cpp_headers/config.o 00:03:37.534 CC test/nvme/err_injection/err_injection.o 00:03:37.534 CXX test/cpp_headers/cpuset.o 00:03:37.534 CXX test/cpp_headers/crc16.o 00:03:37.534 LINK nvme_dp 00:03:37.534 CC test/nvme/startup/startup.o 00:03:37.534 LINK reconnect 00:03:37.534 LINK overhead 00:03:37.794 CXX test/cpp_headers/crc32.o 00:03:37.794 LINK err_injection 00:03:37.794 LINK startup 00:03:37.794 CC test/nvme/reserve/reserve.o 00:03:37.794 CC test/nvme/simple_copy/simple_copy.o 00:03:37.794 LINK dif 00:03:37.794 CC test/env/pci/pci_ut.o 00:03:37.794 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:37.794 CXX test/cpp_headers/crc64.o 00:03:37.794 LINK blobcli 00:03:37.794 CXX test/cpp_headers/dif.o 00:03:38.053 CC test/nvme/connect_stress/connect_stress.o 00:03:38.053 CXX test/cpp_headers/dma.o 00:03:38.053 LINK reserve 00:03:38.053 LINK simple_copy 00:03:38.053 CC test/nvme/boot_partition/boot_partition.o 00:03:38.053 CXX test/cpp_headers/endian.o 00:03:38.053 LINK connect_stress 00:03:38.053 CC test/nvme/compliance/nvme_compliance.o 00:03:38.053 CC test/nvme/fused_ordering/fused_ordering.o 00:03:38.053 LINK memory_ut 00:03:38.053 CXX test/cpp_headers/env_dpdk.o 00:03:38.314 LINK pci_ut 00:03:38.314 CXX test/cpp_headers/env.o 00:03:38.314 LINK boot_partition 00:03:38.314 LINK nvme_manage 00:03:38.314 CC examples/bdev/hello_world/hello_bdev.o 00:03:38.314 CXX test/cpp_headers/event.o 00:03:38.314 LINK fused_ordering 00:03:38.314 CC examples/nvme/arbitration/arbitration.o 00:03:38.314 CC examples/nvme/hotplug/hotplug.o 00:03:38.314 CXX test/cpp_headers/fd_group.o 00:03:38.314 LINK nvme_compliance 00:03:38.575 CC test/nvme/fdp/fdp.o 00:03:38.575 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:38.575 CC examples/bdev/bdevperf/bdevperf.o 00:03:38.575 LINK hello_bdev 00:03:38.575 LINK hotplug 00:03:38.575 CC test/bdev/bdevio/bdevio.o 00:03:38.575 CC test/lvol/esnap/esnap.o 00:03:38.575 CXX test/cpp_headers/fd.o 00:03:38.575 LINK arbitration 00:03:38.575 CC test/nvme/cuse/cuse.o 00:03:38.575 LINK doorbell_aers 00:03:38.833 CXX test/cpp_headers/file.o 00:03:38.833 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:38.833 CC examples/nvme/abort/abort.o 00:03:38.833 CXX test/cpp_headers/fsdev.o 00:03:38.833 LINK fdp 00:03:38.833 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:38.833 CXX test/cpp_headers/fsdev_module.o 00:03:38.833 LINK bdevio 00:03:38.833 CXX test/cpp_headers/ftl.o 00:03:39.092 CXX test/cpp_headers/fuse_dispatcher.o 00:03:39.092 LINK cmb_copy 00:03:39.092 LINK pmr_persistence 00:03:39.092 CXX test/cpp_headers/gpt_spec.o 00:03:39.092 CXX test/cpp_headers/hexlify.o 00:03:39.092 CXX 
test/cpp_headers/histogram_data.o 00:03:39.092 CXX test/cpp_headers/idxd.o 00:03:39.092 CXX test/cpp_headers/idxd_spec.o 00:03:39.092 CXX test/cpp_headers/init.o 00:03:39.092 CXX test/cpp_headers/ioat.o 00:03:39.092 CXX test/cpp_headers/ioat_spec.o 00:03:39.092 LINK abort 00:03:39.350 CXX test/cpp_headers/iscsi_spec.o 00:03:39.350 CXX test/cpp_headers/json.o 00:03:39.350 CXX test/cpp_headers/jsonrpc.o 00:03:39.350 LINK bdevperf 00:03:39.350 CXX test/cpp_headers/keyring.o 00:03:39.350 CXX test/cpp_headers/keyring_module.o 00:03:39.350 CXX test/cpp_headers/likely.o 00:03:39.350 CXX test/cpp_headers/log.o 00:03:39.350 CXX test/cpp_headers/lvol.o 00:03:39.350 CXX test/cpp_headers/md5.o 00:03:39.350 CXX test/cpp_headers/memory.o 00:03:39.350 CXX test/cpp_headers/mmio.o 00:03:39.350 CXX test/cpp_headers/nbd.o 00:03:39.350 CXX test/cpp_headers/net.o 00:03:39.350 CXX test/cpp_headers/notify.o 00:03:39.350 CXX test/cpp_headers/nvme.o 00:03:39.608 CXX test/cpp_headers/nvme_intel.o 00:03:39.608 CXX test/cpp_headers/nvme_ocssd.o 00:03:39.608 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:39.608 CC examples/nvmf/nvmf/nvmf.o 00:03:39.608 CXX test/cpp_headers/nvme_spec.o 00:03:39.608 CXX test/cpp_headers/nvme_zns.o 00:03:39.608 CXX test/cpp_headers/nvmf_cmd.o 00:03:39.608 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:39.608 CXX test/cpp_headers/nvmf.o 00:03:39.608 CXX test/cpp_headers/nvmf_spec.o 00:03:39.608 CXX test/cpp_headers/nvmf_transport.o 00:03:39.608 CXX test/cpp_headers/opal.o 00:03:39.608 CXX test/cpp_headers/opal_spec.o 00:03:39.866 CXX test/cpp_headers/pci_ids.o 00:03:39.866 CXX test/cpp_headers/pipe.o 00:03:39.866 CXX test/cpp_headers/queue.o 00:03:39.866 CXX test/cpp_headers/reduce.o 00:03:39.866 CXX test/cpp_headers/rpc.o 00:03:39.866 CXX test/cpp_headers/scheduler.o 00:03:39.866 LINK cuse 00:03:39.866 CXX test/cpp_headers/scsi.o 00:03:39.866 LINK nvmf 00:03:39.866 CXX test/cpp_headers/scsi_spec.o 00:03:39.866 CXX test/cpp_headers/sock.o 00:03:39.866 CXX test/cpp_headers/stdinc.o 00:03:39.866 CXX test/cpp_headers/string.o 00:03:39.866 CXX test/cpp_headers/thread.o 00:03:39.866 CXX test/cpp_headers/trace.o 00:03:39.866 CXX test/cpp_headers/trace_parser.o 00:03:39.866 CXX test/cpp_headers/tree.o 00:03:39.866 CXX test/cpp_headers/ublk.o 00:03:40.125 CXX test/cpp_headers/util.o 00:03:40.125 CXX test/cpp_headers/uuid.o 00:03:40.125 CXX test/cpp_headers/version.o 00:03:40.125 CXX test/cpp_headers/vfio_user_pci.o 00:03:40.125 CXX test/cpp_headers/vfio_user_spec.o 00:03:40.125 CXX test/cpp_headers/vhost.o 00:03:40.125 CXX test/cpp_headers/vmd.o 00:03:40.125 CXX test/cpp_headers/xor.o 00:03:40.125 CXX test/cpp_headers/zipf.o 00:03:42.657 LINK esnap 00:03:42.918 00:03:42.918 real 1m1.726s 00:03:42.918 user 5m4.381s 00:03:42.918 sys 0m52.413s 00:03:42.918 10:35:03 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:42.918 ************************************ 00:03:42.918 END TEST make 00:03:42.918 ************************************ 00:03:42.918 10:35:03 make -- common/autotest_common.sh@10 -- $ set +x 00:03:43.180 10:35:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:43.180 10:35:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:43.180 10:35:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:43.180 10:35:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.180 10:35:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:43.180 10:35:03 -- pm/common@44 -- $ pid=5795 00:03:43.180 
10:35:03 -- pm/common@50 -- $ kill -TERM 5795 00:03:43.180 10:35:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.180 10:35:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:43.180 10:35:03 -- pm/common@44 -- $ pid=5796 00:03:43.180 10:35:03 -- pm/common@50 -- $ kill -TERM 5796 00:03:43.180 10:35:03 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:43.180 10:35:03 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:43.180 10:35:03 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:43.180 10:35:03 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:43.180 10:35:03 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:43.180 10:35:03 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:43.180 10:35:03 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:43.180 10:35:03 -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.180 10:35:03 -- scripts/common.sh@336 -- # read -ra ver1 00:03:43.180 10:35:03 -- scripts/common.sh@337 -- # IFS=.-: 00:03:43.180 10:35:03 -- scripts/common.sh@337 -- # read -ra ver2 00:03:43.180 10:35:03 -- scripts/common.sh@338 -- # local 'op=<' 00:03:43.180 10:35:03 -- scripts/common.sh@340 -- # ver1_l=2 00:03:43.180 10:35:03 -- scripts/common.sh@341 -- # ver2_l=1 00:03:43.180 10:35:03 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:43.180 10:35:03 -- scripts/common.sh@344 -- # case "$op" in 00:03:43.180 10:35:03 -- scripts/common.sh@345 -- # : 1 00:03:43.180 10:35:03 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:43.180 10:35:03 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:43.181 10:35:03 -- scripts/common.sh@365 -- # decimal 1 00:03:43.181 10:35:03 -- scripts/common.sh@353 -- # local d=1 00:03:43.181 10:35:03 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.181 10:35:03 -- scripts/common.sh@355 -- # echo 1 00:03:43.181 10:35:03 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:43.181 10:35:03 -- scripts/common.sh@366 -- # decimal 2 00:03:43.181 10:35:03 -- scripts/common.sh@353 -- # local d=2 00:03:43.181 10:35:03 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.181 10:35:03 -- scripts/common.sh@355 -- # echo 2 00:03:43.181 10:35:03 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:43.181 10:35:03 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:43.181 10:35:03 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:43.181 10:35:03 -- scripts/common.sh@368 -- # return 0 00:03:43.181 10:35:03 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.181 10:35:03 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:43.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.181 --rc genhtml_branch_coverage=1 00:03:43.181 --rc genhtml_function_coverage=1 00:03:43.181 --rc genhtml_legend=1 00:03:43.181 --rc geninfo_all_blocks=1 00:03:43.181 --rc geninfo_unexecuted_blocks=1 00:03:43.181 00:03:43.181 ' 00:03:43.181 10:35:03 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:43.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.181 --rc genhtml_branch_coverage=1 00:03:43.181 --rc genhtml_function_coverage=1 00:03:43.181 --rc genhtml_legend=1 00:03:43.181 --rc geninfo_all_blocks=1 00:03:43.181 --rc geninfo_unexecuted_blocks=1 00:03:43.181 00:03:43.181 ' 00:03:43.181 10:35:03 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:43.181 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:03:43.181 --rc genhtml_branch_coverage=1 00:03:43.181 --rc genhtml_function_coverage=1 00:03:43.181 --rc genhtml_legend=1 00:03:43.181 --rc geninfo_all_blocks=1 00:03:43.181 --rc geninfo_unexecuted_blocks=1 00:03:43.181 00:03:43.181 ' 00:03:43.181 10:35:03 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:43.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.181 --rc genhtml_branch_coverage=1 00:03:43.181 --rc genhtml_function_coverage=1 00:03:43.181 --rc genhtml_legend=1 00:03:43.181 --rc geninfo_all_blocks=1 00:03:43.181 --rc geninfo_unexecuted_blocks=1 00:03:43.181 00:03:43.181 ' 00:03:43.181 10:35:03 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:43.181 10:35:03 -- nvmf/common.sh@7 -- # uname -s 00:03:43.181 10:35:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.181 10:35:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.181 10:35:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.181 10:35:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.181 10:35:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.181 10:35:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.181 10:35:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.181 10:35:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.181 10:35:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:43.181 10:35:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:43.181 10:35:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85329bc7-ddd3-4e8b-9a4d-f786d01c4aeb 00:03:43.181 10:35:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=85329bc7-ddd3-4e8b-9a4d-f786d01c4aeb 00:03:43.181 10:35:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:43.181 10:35:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:43.181 10:35:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:43.181 10:35:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:43.181 10:35:03 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:43.181 10:35:03 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:43.181 10:35:03 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:43.181 10:35:03 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:43.181 10:35:03 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:43.181 10:35:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.181 10:35:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.181 10:35:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.181 10:35:03 -- paths/export.sh@5 -- # export PATH 00:03:43.181 10:35:03 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.181 10:35:03 -- nvmf/common.sh@51 -- # : 0 00:03:43.181 10:35:03 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:43.181 10:35:03 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:43.181 10:35:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:43.181 10:35:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:43.181 10:35:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:43.181 10:35:03 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:43.181 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:43.181 10:35:03 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:43.181 10:35:03 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:43.181 10:35:03 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:43.181 10:35:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:43.181 10:35:03 -- spdk/autotest.sh@32 -- # uname -s 00:03:43.181 10:35:03 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:43.181 10:35:03 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:43.181 10:35:03 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:43.181 10:35:03 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:43.181 10:35:03 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:43.181 10:35:03 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:43.181 10:35:03 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:43.181 10:35:03 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:43.181 10:35:03 -- spdk/autotest.sh@48 -- # udevadm_pid=67892 00:03:43.181 10:35:03 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:43.181 10:35:03 -- pm/common@17 -- # local monitor 00:03:43.181 10:35:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.181 10:35:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.181 10:35:03 -- pm/common@25 -- # sleep 1 00:03:43.181 10:35:03 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:43.181 10:35:03 -- pm/common@21 -- # date +%s 00:03:43.181 10:35:03 -- pm/common@21 -- # date +%s 00:03:43.181 10:35:03 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728383703 00:03:43.181 10:35:03 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728383703 00:03:43.181 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728383703_collect-cpu-load.pm.log 00:03:43.181 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728383703_collect-vmstat.pm.log 00:03:44.566 10:35:04 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:44.566 10:35:04 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:44.566 10:35:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:44.566 10:35:04 -- common/autotest_common.sh@10 -- # set +x 00:03:44.566 10:35:04 -- spdk/autotest.sh@59 
-- # create_test_list 00:03:44.566 10:35:04 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:44.566 10:35:04 -- common/autotest_common.sh@10 -- # set +x 00:03:44.566 10:35:04 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:44.566 10:35:04 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:44.566 10:35:04 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:44.566 10:35:04 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:44.566 10:35:04 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:44.566 10:35:04 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:44.566 10:35:04 -- common/autotest_common.sh@1455 -- # uname 00:03:44.566 10:35:04 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:44.566 10:35:04 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:44.566 10:35:04 -- common/autotest_common.sh@1475 -- # uname 00:03:44.566 10:35:04 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:44.566 10:35:04 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:44.566 10:35:04 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:44.566 lcov: LCOV version 1.15 00:03:44.566 10:35:04 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:59.471 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:59.471 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:14.398 10:35:33 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:14.398 10:35:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:14.398 10:35:33 -- common/autotest_common.sh@10 -- # set +x 00:04:14.398 10:35:33 -- spdk/autotest.sh@78 -- # rm -f 00:04:14.398 10:35:33 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.398 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:14.398 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:14.398 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:14.398 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:14.398 10:35:34 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:14.398 10:35:34 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:14.398 10:35:34 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:14.398 10:35:34 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:14.398 10:35:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:14.398 10:35:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:14.398 10:35:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:14.398 10:35:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:14.398 10:35:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:14.398 10:35:34 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:14.398 10:35:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1c1n1 00:04:14.398 10:35:34 -- common/autotest_common.sh@1648 -- # local device=nvme1c1n1 00:04:14.398 10:35:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:04:14.398 10:35:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:14.398 10:35:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:14.398 10:35:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:14.398 10:35:34 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:14.398 10:35:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:14.398 10:35:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:14.398 10:35:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:14.398 10:35:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:14.398 10:35:34 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:14.398 10:35:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:14.398 10:35:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:14.398 10:35:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:14.398 10:35:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:14.398 10:35:34 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:14.398 10:35:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:14.398 10:35:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:14.398 10:35:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:14.398 10:35:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n2 00:04:14.398 10:35:34 -- common/autotest_common.sh@1648 -- # local device=nvme3n2 00:04:14.398 10:35:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:04:14.398 10:35:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:14.398 10:35:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:14.398 10:35:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n3 00:04:14.398 10:35:34 -- common/autotest_common.sh@1648 -- # local device=nvme3n3 00:04:14.398 10:35:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:04:14.398 10:35:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:14.398 10:35:34 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:14.398 10:35:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.398 10:35:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.398 10:35:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:14.398 10:35:34 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:14.398 10:35:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:14.398 No valid GPT data, bailing 00:04:14.399 10:35:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:14.399 10:35:34 -- scripts/common.sh@394 -- # pt= 00:04:14.399 10:35:34 -- scripts/common.sh@395 -- # return 1 00:04:14.399 10:35:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:14.399 1+0 records in 00:04:14.399 1+0 records out 00:04:14.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122276 s, 85.8 MB/s 
00:04:14.399 10:35:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.399 10:35:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.399 10:35:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:14.399 10:35:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:14.399 10:35:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:14.399 No valid GPT data, bailing 00:04:14.399 10:35:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:14.399 10:35:34 -- scripts/common.sh@394 -- # pt= 00:04:14.399 10:35:34 -- scripts/common.sh@395 -- # return 1 00:04:14.399 10:35:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:14.399 1+0 records in 00:04:14.399 1+0 records out 00:04:14.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458137 s, 229 MB/s 00:04:14.399 10:35:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.399 10:35:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.399 10:35:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:14.399 10:35:34 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:14.399 10:35:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:14.399 No valid GPT data, bailing 00:04:14.399 10:35:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:14.399 10:35:34 -- scripts/common.sh@394 -- # pt= 00:04:14.399 10:35:34 -- scripts/common.sh@395 -- # return 1 00:04:14.399 10:35:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:14.399 1+0 records in 00:04:14.399 1+0 records out 00:04:14.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00577781 s, 181 MB/s 00:04:14.399 10:35:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.399 10:35:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.399 10:35:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:14.399 10:35:34 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:14.399 10:35:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:14.399 No valid GPT data, bailing 00:04:14.399 10:35:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:14.399 10:35:34 -- scripts/common.sh@394 -- # pt= 00:04:14.399 10:35:34 -- scripts/common.sh@395 -- # return 1 00:04:14.399 10:35:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:14.399 1+0 records in 00:04:14.399 1+0 records out 00:04:14.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491844 s, 213 MB/s 00:04:14.399 10:35:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.399 10:35:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.399 10:35:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:04:14.399 10:35:34 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:04:14.399 10:35:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:04:14.399 No valid GPT data, bailing 00:04:14.399 10:35:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:04:14.399 10:35:34 -- scripts/common.sh@394 -- # pt= 00:04:14.399 10:35:34 -- scripts/common.sh@395 -- # return 1 00:04:14.399 10:35:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:04:14.399 1+0 records in 00:04:14.399 1+0 records out 00:04:14.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00393676 s, 266 
MB/s 00:04:14.399 10:35:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:14.399 10:35:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:14.399 10:35:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:04:14.399 10:35:34 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:04:14.399 10:35:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:04:14.399 No valid GPT data, bailing 00:04:14.399 10:35:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:04:14.399 10:35:34 -- scripts/common.sh@394 -- # pt= 00:04:14.399 10:35:34 -- scripts/common.sh@395 -- # return 1 00:04:14.399 10:35:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:04:14.399 1+0 records in 00:04:14.399 1+0 records out 00:04:14.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505694 s, 207 MB/s 00:04:14.399 10:35:34 -- spdk/autotest.sh@105 -- # sync 00:04:14.399 10:35:34 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:14.399 10:35:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:14.399 10:35:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:16.312 10:35:36 -- spdk/autotest.sh@111 -- # uname -s 00:04:16.312 10:35:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:16.312 10:35:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:16.312 10:35:36 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:16.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.884 Hugepages 00:04:16.884 node hugesize free / total 00:04:16.884 node0 1048576kB 0 / 0 00:04:16.884 node0 2048kB 0 / 0 00:04:16.884 00:04:16.884 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:16.884 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:16.884 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:17.146 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:17.146 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:04:17.146 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:17.146 10:35:37 -- spdk/autotest.sh@117 -- # uname -s 00:04:17.146 10:35:37 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:17.146 10:35:37 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:17.146 10:35:37 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:17.720 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.291 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.291 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.291 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.291 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.291 10:35:38 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:19.233 10:35:39 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:19.233 10:35:39 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:19.233 10:35:39 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:19.233 10:35:39 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:19.233 10:35:39 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:19.233 10:35:39 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:19.233 10:35:39 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
00:04:19.233 10:35:39 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:19.233 10:35:39 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:19.233 10:35:39 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:19.233 10:35:39 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:19.233 10:35:39 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.806 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.806 Waiting for block devices as requested 00:04:19.806 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:19.806 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:20.067 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:20.068 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:25.359 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:25.359 10:35:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:25.359 10:35:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:25.359 10:35:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:25.359 10:35:45 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:25.359 10:35:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:25.359 10:35:45 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:25.359 10:35:45 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:25.359 10:35:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:25.359 10:35:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:25.359 10:35:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:25.359 10:35:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:25.359 10:35:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:25.359 10:35:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:25.359 10:35:45 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:25.359 10:35:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:25.359 10:35:45 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:25.359 10:35:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:25.359 10:35:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:25.359 10:35:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:25.359 10:35:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:25.359 10:35:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:25.359 10:35:45 -- common/autotest_common.sh@1541 -- # continue 00:04:25.359 10:35:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:25.359 10:35:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:25.359 10:35:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:25.359 10:35:45 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:25.359 10:35:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
00:04:25.359 10:35:45 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:25.359 10:35:45 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:25.359 10:35:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:25.359 10:35:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:25.359 10:35:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:25.359 10:35:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:25.359 10:35:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:25.359 10:35:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:25.360 10:35:45 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:25.360 10:35:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:25.360 10:35:45 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:25.360 10:35:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:25.360 10:35:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:25.360 10:35:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:25.360 10:35:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:25.360 10:35:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:25.360 10:35:45 -- common/autotest_common.sh@1541 -- # continue 00:04:25.360 10:35:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:25.360 10:35:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:25.360 10:35:45 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:25.360 10:35:45 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:25.360 10:35:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:25.360 10:35:45 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:25.360 10:35:45 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:25.360 10:35:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:25.360 10:35:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:25.360 10:35:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:25.360 10:35:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:25.360 10:35:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:25.360 10:35:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:25.360 10:35:45 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:25.360 10:35:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:25.360 10:35:45 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:25.360 10:35:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:25.360 10:35:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:25.360 10:35:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:25.360 10:35:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:25.360 10:35:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:25.360 10:35:45 -- common/autotest_common.sh@1541 -- # continue 00:04:25.360 10:35:45 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:25.360 10:35:45 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:25.360 10:35:45 -- 
common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:25.360 10:35:45 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:25.360 10:35:45 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:25.360 10:35:45 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:25.360 10:35:45 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:25.360 10:35:45 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:25.360 10:35:45 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:25.360 10:35:45 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:25.360 10:35:45 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:25.360 10:35:45 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:25.360 10:35:45 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:25.360 10:35:45 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:25.360 10:35:45 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:25.360 10:35:45 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:25.360 10:35:45 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:25.360 10:35:45 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:25.360 10:35:45 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:25.360 10:35:45 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:25.360 10:35:45 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:25.360 10:35:45 -- common/autotest_common.sh@1541 -- # continue 00:04:25.360 10:35:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:25.360 10:35:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:25.360 10:35:45 -- common/autotest_common.sh@10 -- # set +x 00:04:25.360 10:35:45 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:25.360 10:35:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.360 10:35:45 -- common/autotest_common.sh@10 -- # set +x 00:04:25.360 10:35:45 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.932 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.504 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.504 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.504 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.504 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:26.504 10:35:46 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:26.504 10:35:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.504 10:35:46 -- common/autotest_common.sh@10 -- # set +x 00:04:26.504 10:35:46 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:26.504 10:35:46 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:26.504 10:35:46 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:26.504 10:35:46 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:26.504 10:35:46 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:26.504 10:35:46 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:26.504 10:35:46 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:26.504 10:35:46 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:26.504 10:35:47 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:26.504 
10:35:47 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:26.504 10:35:47 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:26.504 10:35:47 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:26.504 10:35:47 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:26.504 10:35:47 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:26.504 10:35:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:26.504 10:35:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:26.504 10:35:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:26.504 10:35:47 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:26.504 10:35:47 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:26.504 10:35:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:26.504 10:35:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:26.504 10:35:47 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:26.504 10:35:47 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:26.504 10:35:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:26.504 10:35:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:26.504 10:35:47 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:26.504 10:35:47 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:26.504 10:35:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:26.765 10:35:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:26.765 10:35:47 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:26.765 10:35:47 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:26.765 10:35:47 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:26.765 10:35:47 -- common/autotest_common.sh@1570 -- # return 0 00:04:26.765 10:35:47 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:26.765 10:35:47 -- common/autotest_common.sh@1578 -- # return 0 00:04:26.765 10:35:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:26.765 10:35:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:26.765 10:35:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:26.765 10:35:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:26.765 10:35:47 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:26.765 10:35:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.765 10:35:47 -- common/autotest_common.sh@10 -- # set +x 00:04:26.765 10:35:47 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:26.766 10:35:47 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:26.766 10:35:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.766 10:35:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.766 10:35:47 -- common/autotest_common.sh@10 -- # set +x 00:04:26.766 ************************************ 00:04:26.766 START TEST env 00:04:26.766 ************************************ 00:04:26.766 10:35:47 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:26.766 * Looking for test storage... 
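opal_revert_cleanup narrows the controller list by PCI device ID before doing anything destructive. A sketch of that filter using the same sysfs attribute as the trace (assumption: 0x0a54 is the device ID of the physical controllers the cleanup targets; the emulated 1b36:0010 devices here never match, which is why the loop ends with (( 0 > 0 )) and returns without touching anything):

    bdfs=()
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # '0x0010' here
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
    done
    (( ${#bdfs[@]} > 0 )) || echo 'no matching controllers, nothing to revert'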
00:04:26.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:26.766 10:35:47 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:26.766 10:35:47 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:26.766 10:35:47 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:26.766 10:35:47 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:26.766 10:35:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.766 10:35:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.766 10:35:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.766 10:35:47 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.766 10:35:47 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.766 10:35:47 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.766 10:35:47 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.766 10:35:47 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.766 10:35:47 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.766 10:35:47 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.766 10:35:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.766 10:35:47 env -- scripts/common.sh@344 -- # case "$op" in 00:04:26.766 10:35:47 env -- scripts/common.sh@345 -- # : 1 00:04:26.766 10:35:47 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.766 10:35:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.766 10:35:47 env -- scripts/common.sh@365 -- # decimal 1 00:04:26.766 10:35:47 env -- scripts/common.sh@353 -- # local d=1 00:04:26.766 10:35:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.766 10:35:47 env -- scripts/common.sh@355 -- # echo 1 00:04:26.766 10:35:47 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.766 10:35:47 env -- scripts/common.sh@366 -- # decimal 2 00:04:26.766 10:35:47 env -- scripts/common.sh@353 -- # local d=2 00:04:26.766 10:35:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.766 10:35:47 env -- scripts/common.sh@355 -- # echo 2 00:04:26.766 10:35:47 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.766 10:35:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.766 10:35:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.766 10:35:47 env -- scripts/common.sh@368 -- # return 0 00:04:26.766 10:35:47 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.766 10:35:47 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:26.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.766 --rc genhtml_branch_coverage=1 00:04:26.766 --rc genhtml_function_coverage=1 00:04:26.766 --rc genhtml_legend=1 00:04:26.766 --rc geninfo_all_blocks=1 00:04:26.766 --rc geninfo_unexecuted_blocks=1 00:04:26.766 00:04:26.766 ' 00:04:26.766 10:35:47 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:26.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.766 --rc genhtml_branch_coverage=1 00:04:26.766 --rc genhtml_function_coverage=1 00:04:26.766 --rc genhtml_legend=1 00:04:26.766 --rc geninfo_all_blocks=1 00:04:26.766 --rc geninfo_unexecuted_blocks=1 00:04:26.766 00:04:26.766 ' 00:04:26.766 10:35:47 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:26.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.766 --rc genhtml_branch_coverage=1 00:04:26.766 --rc genhtml_function_coverage=1 00:04:26.766 --rc 
genhtml_legend=1 00:04:26.766 --rc geninfo_all_blocks=1 00:04:26.766 --rc geninfo_unexecuted_blocks=1 00:04:26.766 00:04:26.766 ' 00:04:26.766 10:35:47 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:26.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.766 --rc genhtml_branch_coverage=1 00:04:26.766 --rc genhtml_function_coverage=1 00:04:26.766 --rc genhtml_legend=1 00:04:26.766 --rc geninfo_all_blocks=1 00:04:26.766 --rc geninfo_unexecuted_blocks=1 00:04:26.766 00:04:26.766 ' 00:04:26.766 10:35:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:26.766 10:35:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.766 10:35:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.766 10:35:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.766 ************************************ 00:04:26.766 START TEST env_memory 00:04:26.766 ************************************ 00:04:26.766 10:35:47 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:26.766 00:04:26.766 00:04:26.766 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.766 http://cunit.sourceforge.net/ 00:04:26.766 00:04:26.766 00:04:26.766 Suite: memory 00:04:26.766 Test: alloc and free memory map ...[2024-10-08 10:35:47.339052] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:27.051 passed 00:04:27.051 Test: mem map translation ...[2024-10-08 10:35:47.377954] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:27.051 [2024-10-08 10:35:47.378002] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:27.051 [2024-10-08 10:35:47.378061] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:27.051 [2024-10-08 10:35:47.378076] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:27.051 passed 00:04:27.051 Test: mem map registration ...[2024-10-08 10:35:47.446263] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:27.051 [2024-10-08 10:35:47.446298] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:27.051 passed 00:04:27.051 Test: mem map adjacent registrations ...passed 00:04:27.051 00:04:27.051 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.051 suites 1 1 n/a 0 0 00:04:27.051 tests 4 4 4 0 0 00:04:27.051 asserts 152 152 152 0 n/a 00:04:27.051 00:04:27.051 Elapsed time = 0.233 seconds 00:04:27.051 00:04:27.051 real 0m0.270s 00:04:27.051 user 0m0.242s 00:04:27.051 sys 0m0.020s 00:04:27.051 10:35:47 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.051 ************************************ 00:04:27.051 END TEST env_memory 00:04:27.051 ************************************ 00:04:27.051 10:35:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:27.051 10:35:47 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:27.051 10:35:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.051 10:35:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.051 10:35:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.332 ************************************ 00:04:27.332 START TEST env_vtophys 00:04:27.332 ************************************ 00:04:27.332 10:35:47 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:27.332 EAL: lib.eal log level changed from notice to debug 00:04:27.332 EAL: Detected lcore 0 as core 0 on socket 0 00:04:27.332 EAL: Detected lcore 1 as core 0 on socket 0 00:04:27.332 EAL: Detected lcore 2 as core 0 on socket 0 00:04:27.332 EAL: Detected lcore 3 as core 0 on socket 0 00:04:27.332 EAL: Detected lcore 4 as core 0 on socket 0 00:04:27.332 EAL: Detected lcore 5 as core 0 on socket 0 00:04:27.332 EAL: Detected lcore 6 as core 0 on socket 0 00:04:27.332 EAL: Detected lcore 7 as core 0 on socket 0 00:04:27.332 EAL: Detected lcore 8 as core 0 on socket 0 00:04:27.332 EAL: Detected lcore 9 as core 0 on socket 0 00:04:27.332 EAL: Maximum logical cores by configuration: 128 00:04:27.332 EAL: Detected CPU lcores: 10 00:04:27.332 EAL: Detected NUMA nodes: 1 00:04:27.332 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:04:27.332 EAL: Detected shared linkage of DPDK 00:04:27.332 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:04:27.332 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:04:27.332 EAL: Registered [vdev] bus. 00:04:27.332 EAL: bus.vdev log level changed from disabled to notice 00:04:27.332 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:04:27.332 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:04:27.332 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:27.332 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:27.332 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:04:27.332 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:04:27.332 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:04:27.332 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:04:27.332 EAL: No shared files mode enabled, IPC will be disabled 00:04:27.332 EAL: No shared files mode enabled, IPC is disabled 00:04:27.332 EAL: Selected IOVA mode 'PA' 00:04:27.332 EAL: Probing VFIO support... 00:04:27.332 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:27.332 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:27.332 EAL: Ask a virtual area of 0x2e000 bytes 00:04:27.332 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:27.332 EAL: Setting up physically contiguous memory... 
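Because /sys/module/vfio is absent in this guest, EAL skips VFIO and settles on IOVA mode 'PA'. A pre-flight check in the same spirit (a sketch only, not part of the test; EAL performs the equivalent probe internally):

    if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
        echo 'VFIO loaded: IOMMU-backed IOVA-as-VA is available'
    else
        echo 'VFIO missing: uio path, IOVA-as-PA'
    fi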
00:04:27.332 EAL: Setting maximum number of open files to 524288 00:04:27.332 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:27.332 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:27.332 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.332 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:27.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.332 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.332 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:27.332 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:27.332 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.332 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:27.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.332 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.332 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:27.332 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:27.332 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.332 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:27.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.332 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.332 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:27.332 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:27.332 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.332 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:27.332 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.332 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.332 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:27.332 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:27.332 EAL: Hugepages will be freed exactly as allocated. 00:04:27.332 EAL: No shared files mode enabled, IPC is disabled 00:04:27.332 EAL: No shared files mode enabled, IPC is disabled 00:04:27.332 EAL: TSC frequency is ~2600000 KHz 00:04:27.332 EAL: Main lcore 0 is ready (tid=7f0a8d88ca40;cpuset=[0]) 00:04:27.332 EAL: Trying to obtain current memory policy. 00:04:27.332 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.332 EAL: Restoring previous memory policy: 0 00:04:27.332 EAL: request: mp_malloc_sync 00:04:27.332 EAL: No shared files mode enabled, IPC is disabled 00:04:27.332 EAL: Heap on socket 0 was expanded by 2MB 00:04:27.332 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:27.332 EAL: No shared files mode enabled, IPC is disabled 00:04:27.332 EAL: Mem event callback 'spdk:(nil)' registered 00:04:27.332 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:27.332 00:04:27.332 00:04:27.332 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.332 http://cunit.sourceforge.net/ 00:04:27.332 00:04:27.332 00:04:27.332 Suite: components_suite 00:04:27.593 Test: vtophys_malloc_test ...passed 00:04:27.593 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
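The memseg-list reservations above are easy to verify: each list holds 8192 segments of 2 MiB, so every 0x61000-byte metadata area is followed by an 8192 x 2 MiB = 16 GiB = 0x400000000 VA window, and the four lists pre-reserve 64 GiB of address space in total. A quick check of those numbers (arithmetic only, no extra configuration implied):

    printf '0x%x bytes per list\n' $((8192 * 2097152))                      # 0x400000000
    printf '%d GiB reserved in total\n' $((4 * 8192 * 2097152 / 1024**3))   # 64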
00:04:27.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.593 EAL: Restoring previous memory policy: 4 00:04:27.593 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.593 EAL: request: mp_malloc_sync 00:04:27.593 EAL: No shared files mode enabled, IPC is disabled 00:04:27.593 EAL: Heap on socket 0 was expanded by 4MB 00:04:27.593 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.593 EAL: request: mp_malloc_sync 00:04:27.593 EAL: No shared files mode enabled, IPC is disabled 00:04:27.593 EAL: Heap on socket 0 was shrunk by 4MB 00:04:27.593 EAL: Trying to obtain current memory policy. 00:04:27.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.593 EAL: Restoring previous memory policy: 4 00:04:27.593 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.593 EAL: request: mp_malloc_sync 00:04:27.593 EAL: No shared files mode enabled, IPC is disabled 00:04:27.593 EAL: Heap on socket 0 was expanded by 6MB 00:04:27.593 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.593 EAL: request: mp_malloc_sync 00:04:27.593 EAL: No shared files mode enabled, IPC is disabled 00:04:27.593 EAL: Heap on socket 0 was shrunk by 6MB 00:04:27.593 EAL: Trying to obtain current memory policy. 00:04:27.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.593 EAL: Restoring previous memory policy: 4 00:04:27.593 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.593 EAL: request: mp_malloc_sync 00:04:27.593 EAL: No shared files mode enabled, IPC is disabled 00:04:27.593 EAL: Heap on socket 0 was expanded by 10MB 00:04:27.593 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.593 EAL: request: mp_malloc_sync 00:04:27.593 EAL: No shared files mode enabled, IPC is disabled 00:04:27.593 EAL: Heap on socket 0 was shrunk by 10MB 00:04:27.593 EAL: Trying to obtain current memory policy. 00:04:27.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.593 EAL: Restoring previous memory policy: 4 00:04:27.593 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.593 EAL: request: mp_malloc_sync 00:04:27.593 EAL: No shared files mode enabled, IPC is disabled 00:04:27.593 EAL: Heap on socket 0 was expanded by 18MB 00:04:27.593 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.593 EAL: request: mp_malloc_sync 00:04:27.593 EAL: No shared files mode enabled, IPC is disabled 00:04:27.593 EAL: Heap on socket 0 was shrunk by 18MB 00:04:27.593 EAL: Trying to obtain current memory policy. 00:04:27.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.593 EAL: Restoring previous memory policy: 4 00:04:27.593 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.593 EAL: request: mp_malloc_sync 00:04:27.593 EAL: No shared files mode enabled, IPC is disabled 00:04:27.593 EAL: Heap on socket 0 was expanded by 34MB 00:04:27.593 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.593 EAL: request: mp_malloc_sync 00:04:27.593 EAL: No shared files mode enabled, IPC is disabled 00:04:27.593 EAL: Heap on socket 0 was shrunk by 34MB 00:04:27.593 EAL: Trying to obtain current memory policy. 
00:04:27.593 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.593 EAL: Restoring previous memory policy: 4 00:04:27.593 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.593 EAL: request: mp_malloc_sync 00:04:27.593 EAL: No shared files mode enabled, IPC is disabled 00:04:27.593 EAL: Heap on socket 0 was expanded by 66MB 00:04:27.593 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.593 EAL: request: mp_malloc_sync 00:04:27.593 EAL: No shared files mode enabled, IPC is disabled 00:04:27.593 EAL: Heap on socket 0 was shrunk by 66MB 00:04:27.594 EAL: Trying to obtain current memory policy. 00:04:27.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.594 EAL: Restoring previous memory policy: 4 00:04:27.594 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.594 EAL: request: mp_malloc_sync 00:04:27.594 EAL: No shared files mode enabled, IPC is disabled 00:04:27.594 EAL: Heap on socket 0 was expanded by 130MB 00:04:27.594 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.594 EAL: request: mp_malloc_sync 00:04:27.594 EAL: No shared files mode enabled, IPC is disabled 00:04:27.594 EAL: Heap on socket 0 was shrunk by 130MB 00:04:27.594 EAL: Trying to obtain current memory policy. 00:04:27.594 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.854 EAL: Restoring previous memory policy: 4 00:04:27.854 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.854 EAL: request: mp_malloc_sync 00:04:27.854 EAL: No shared files mode enabled, IPC is disabled 00:04:27.854 EAL: Heap on socket 0 was expanded by 258MB 00:04:27.854 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.854 EAL: request: mp_malloc_sync 00:04:27.854 EAL: No shared files mode enabled, IPC is disabled 00:04:27.854 EAL: Heap on socket 0 was shrunk by 258MB 00:04:27.854 EAL: Trying to obtain current memory policy. 00:04:27.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.854 EAL: Restoring previous memory policy: 4 00:04:27.854 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.854 EAL: request: mp_malloc_sync 00:04:27.854 EAL: No shared files mode enabled, IPC is disabled 00:04:27.854 EAL: Heap on socket 0 was expanded by 514MB 00:04:27.854 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.114 EAL: request: mp_malloc_sync 00:04:28.114 EAL: No shared files mode enabled, IPC is disabled 00:04:28.114 EAL: Heap on socket 0 was shrunk by 514MB 00:04:28.114 EAL: Trying to obtain current memory policy. 
00:04:28.114 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.114 EAL: Restoring previous memory policy: 4 00:04:28.114 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.115 EAL: request: mp_malloc_sync 00:04:28.115 EAL: No shared files mode enabled, IPC is disabled 00:04:28.115 EAL: Heap on socket 0 was expanded by 1026MB 00:04:28.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.375 passed 00:04:28.375 00:04:28.375 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.375 suites 1 1 n/a 0 0 00:04:28.375 tests 2 2 2 0 0 00:04:28.375 asserts 5316 5316 5316 0 n/a 00:04:28.375 00:04:28.375 Elapsed time = 1.024 seconds 00:04:28.375 EAL: request: mp_malloc_sync 00:04:28.375 EAL: No shared files mode enabled, IPC is disabled 00:04:28.375 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:28.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.375 EAL: request: mp_malloc_sync 00:04:28.375 EAL: No shared files mode enabled, IPC is disabled 00:04:28.375 EAL: Heap on socket 0 was shrunk by 2MB 00:04:28.375 EAL: No shared files mode enabled, IPC is disabled 00:04:28.375 EAL: No shared files mode enabled, IPC is disabled 00:04:28.375 EAL: No shared files mode enabled, IPC is disabled 00:04:28.375 00:04:28.375 real 0m1.253s 00:04:28.375 user 0m0.493s 00:04:28.375 sys 0m0.628s 00:04:28.375 10:35:48 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.375 10:35:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:28.375 ************************************ 00:04:28.375 END TEST env_vtophys 00:04:28.375 ************************************ 00:04:28.375 10:35:48 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:28.375 10:35:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.375 10:35:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.375 10:35:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.375 ************************************ 00:04:28.375 START TEST env_pci 00:04:28.375 ************************************ 00:04:28.375 10:35:48 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:28.636 00:04:28.636 00:04:28.636 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.636 http://cunit.sourceforge.net/ 00:04:28.636 00:04:28.636 00:04:28.636 Suite: pci 00:04:28.636 Test: pci_hook ...[2024-10-08 10:35:48.965321] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70636 has claimed it 00:04:28.636 passed 00:04:28.636 00:04:28.636 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.636 suites 1 1 n/a 0 0 00:04:28.636 tests 1 1 1 0 0 00:04:28.636 asserts 25 25 25 0 n/a 00:04:28.636 00:04:28.636 Elapsed time = 0.006 seconds 00:04:28.636 EAL: Cannot find device (10000:00:01.0) 00:04:28.636 EAL: Failed to attach device on primary process 00:04:28.636 ************************************ 00:04:28.636 END TEST env_pci 00:04:28.636 ************************************ 00:04:28.636 00:04:28.636 real 0m0.061s 00:04:28.636 user 0m0.024s 00:04:28.636 sys 0m0.035s 00:04:28.636 10:35:49 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.636 10:35:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:28.636 10:35:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:28.636 10:35:49 env -- env/env.sh@15 -- # uname 00:04:28.636 10:35:49 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:28.636 10:35:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:28.636 10:35:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:28.636 10:35:49 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:28.636 10:35:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.636 10:35:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.636 ************************************ 00:04:28.636 START TEST env_dpdk_post_init 00:04:28.636 ************************************ 00:04:28.636 10:35:49 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:28.636 EAL: Detected CPU lcores: 10 00:04:28.636 EAL: Detected NUMA nodes: 1 00:04:28.636 EAL: Detected shared linkage of DPDK 00:04:28.636 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.636 EAL: Selected IOVA mode 'PA' 00:04:28.897 Starting DPDK initialization... 00:04:28.898 Starting SPDK post initialization... 00:04:28.898 SPDK NVMe probe 00:04:28.898 Attaching to 0000:00:10.0 00:04:28.898 Attaching to 0000:00:11.0 00:04:28.898 Attaching to 0000:00:12.0 00:04:28.898 Attaching to 0000:00:13.0 00:04:28.898 Attached to 0000:00:11.0 00:04:28.898 Attached to 0000:00:13.0 00:04:28.898 Attached to 0000:00:10.0 00:04:28.898 Attached to 0000:00:12.0 00:04:28.898 Cleaning up... 00:04:28.898 ************************************ 00:04:28.898 END TEST env_dpdk_post_init 00:04:28.898 ************************************ 00:04:28.898 00:04:28.898 real 0m0.229s 00:04:28.898 user 0m0.058s 00:04:28.898 sys 0m0.072s 00:04:28.898 10:35:49 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.898 10:35:49 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.898 10:35:49 env -- env/env.sh@26 -- # uname 00:04:28.898 10:35:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:28.898 10:35:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:28.898 10:35:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.898 10:35:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.898 10:35:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.898 ************************************ 00:04:28.898 START TEST env_mem_callbacks 00:04:28.898 ************************************ 00:04:28.898 10:35:49 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:28.898 EAL: Detected CPU lcores: 10 00:04:28.898 EAL: Detected NUMA nodes: 1 00:04:28.898 EAL: Detected shared linkage of DPDK 00:04:28.898 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.898 EAL: Selected IOVA mode 'PA' 00:04:29.159 00:04:29.159 00:04:29.159 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.159 http://cunit.sourceforge.net/ 00:04:29.159 00:04:29.159 00:04:29.159 Suite: memory 00:04:29.159 Test: test ... 
00:04:29.159 register 0x200000200000 2097152 00:04:29.159 malloc 3145728 00:04:29.159 register 0x200000400000 4194304 00:04:29.159 buf 0x200000500000 len 3145728 PASSED 00:04:29.159 malloc 64 00:04:29.159 buf 0x2000004fff40 len 64 PASSED 00:04:29.159 malloc 4194304 00:04:29.159 register 0x200000800000 6291456 00:04:29.159 buf 0x200000a00000 len 4194304 PASSED 00:04:29.159 free 0x200000500000 3145728 00:04:29.159 free 0x2000004fff40 64 00:04:29.159 unregister 0x200000400000 4194304 PASSED 00:04:29.159 free 0x200000a00000 4194304 00:04:29.159 unregister 0x200000800000 6291456 PASSED 00:04:29.159 malloc 8388608 00:04:29.159 register 0x200000400000 10485760 00:04:29.159 buf 0x200000600000 len 8388608 PASSED 00:04:29.159 free 0x200000600000 8388608 00:04:29.159 unregister 0x200000400000 10485760 PASSED 00:04:29.159 passed 00:04:29.159 00:04:29.159 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.159 suites 1 1 n/a 0 0 00:04:29.159 tests 1 1 1 0 0 00:04:29.159 asserts 15 15 15 0 n/a 00:04:29.159 00:04:29.159 Elapsed time = 0.010 seconds 00:04:29.159 00:04:29.159 real 0m0.171s 00:04:29.159 user 0m0.025s 00:04:29.159 sys 0m0.042s 00:04:29.159 10:35:49 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.159 10:35:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:29.159 ************************************ 00:04:29.159 END TEST env_mem_callbacks 00:04:29.159 ************************************ 00:04:29.159 00:04:29.159 real 0m2.469s 00:04:29.159 user 0m1.017s 00:04:29.159 sys 0m0.995s 00:04:29.159 ************************************ 00:04:29.159 END TEST env 00:04:29.159 ************************************ 00:04:29.159 10:35:49 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.159 10:35:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.159 10:35:49 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:29.159 10:35:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.159 10:35:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.159 10:35:49 -- common/autotest_common.sh@10 -- # set +x 00:04:29.159 ************************************ 00:04:29.159 START TEST rpc 00:04:29.159 ************************************ 00:04:29.159 10:35:49 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:29.159 * Looking for test storage... 
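In the callback trace above, every registration is a whole number of 2 MiB hugepages: the 3 MiB malloc registers 4 MiB at 0x200000400000, and the 4 MiB malloc registers 6 MiB because the allocator's per-element overhead pushes it past two pages. The rounding is one line of arithmetic (a check of the trace's numbers, assuming 2 MiB pages):

    page=$((2 * 1024 * 1024))
    round_up() { echo $(( ($1 + page - 1) / page * page )); }
    round_up 3145728            # -> 4194304, as registered above
    round_up $((4194304 + 1))   # any element overhead -> 6291456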
00:04:29.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:29.159 10:35:49 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:29.159 10:35:49 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:29.159 10:35:49 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:29.421 10:35:49 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:29.421 10:35:49 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:29.421 10:35:49 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:29.421 10:35:49 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:29.421 10:35:49 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:29.421 10:35:49 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:29.421 10:35:49 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:29.421 10:35:49 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:29.421 10:35:49 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:29.421 10:35:49 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:29.421 10:35:49 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:29.421 10:35:49 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:29.421 10:35:49 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:29.421 10:35:49 rpc -- scripts/common.sh@345 -- # : 1 00:04:29.421 10:35:49 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:29.421 10:35:49 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:29.421 10:35:49 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:29.421 10:35:49 rpc -- scripts/common.sh@353 -- # local d=1 00:04:29.421 10:35:49 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:29.421 10:35:49 rpc -- scripts/common.sh@355 -- # echo 1 00:04:29.421 10:35:49 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:29.421 10:35:49 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:29.421 10:35:49 rpc -- scripts/common.sh@353 -- # local d=2 00:04:29.421 10:35:49 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:29.421 10:35:49 rpc -- scripts/common.sh@355 -- # echo 2 00:04:29.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:29.421 10:35:49 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:29.421 10:35:49 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:29.421 10:35:49 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:29.421 10:35:49 rpc -- scripts/common.sh@368 -- # return 0 00:04:29.421 10:35:49 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:29.421 10:35:49 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:29.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.421 --rc genhtml_branch_coverage=1 00:04:29.421 --rc genhtml_function_coverage=1 00:04:29.421 --rc genhtml_legend=1 00:04:29.421 --rc geninfo_all_blocks=1 00:04:29.421 --rc geninfo_unexecuted_blocks=1 00:04:29.421 00:04:29.421 ' 00:04:29.421 10:35:49 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:29.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.421 --rc genhtml_branch_coverage=1 00:04:29.421 --rc genhtml_function_coverage=1 00:04:29.421 --rc genhtml_legend=1 00:04:29.421 --rc geninfo_all_blocks=1 00:04:29.421 --rc geninfo_unexecuted_blocks=1 00:04:29.421 00:04:29.421 ' 00:04:29.421 10:35:49 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:29.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.421 --rc genhtml_branch_coverage=1 00:04:29.421 --rc genhtml_function_coverage=1 00:04:29.421 --rc genhtml_legend=1 00:04:29.421 --rc geninfo_all_blocks=1 00:04:29.421 --rc geninfo_unexecuted_blocks=1 00:04:29.421 00:04:29.421 ' 00:04:29.421 10:35:49 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:29.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:29.421 --rc genhtml_branch_coverage=1 00:04:29.421 --rc genhtml_function_coverage=1 00:04:29.421 --rc genhtml_legend=1 00:04:29.421 --rc geninfo_all_blocks=1 00:04:29.421 --rc geninfo_unexecuted_blocks=1 00:04:29.421 00:04:29.421 ' 00:04:29.421 10:35:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70763 00:04:29.421 10:35:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.421 10:35:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70763 00:04:29.421 10:35:49 rpc -- common/autotest_common.sh@831 -- # '[' -z 70763 ']' 00:04:29.421 10:35:49 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.421 10:35:49 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:29.421 10:35:49 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:29.421 10:35:49 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.421 10:35:49 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:29.421 10:35:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.421 [2024-10-08 10:35:49.858441] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:04:29.421 [2024-10-08 10:35:49.858590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70763 ] 00:04:29.421 [2024-10-08 10:35:49.990530] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
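The lt 1.15 2 probe above walks both version strings component by component after splitting on dots, dashes, and colons; the first unequal component decides. A trimmed sketch of that comparison (assumption: reduced from the scripts/common.sh cmp_versions helper to the less-than case only):

    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo 'old lcov: add the branch/function coverage options'

With ver1=(1 15) and ver2=(2), the very first component settles it (1 < 2), so the trace goes on to export the LCOV_OPTS block.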
00:04:29.682 [2024-10-08 10:35:50.011740] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.682 [2024-10-08 10:35:50.061916] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:29.682 [2024-10-08 10:35:50.061988] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70763' to capture a snapshot of events at runtime. 00:04:29.682 [2024-10-08 10:35:50.061999] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:29.682 [2024-10-08 10:35:50.062011] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:29.682 [2024-10-08 10:35:50.062019] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70763 for offline analysis/debug. 00:04:29.682 [2024-10-08 10:35:50.062418] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.254 10:35:50 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:30.254 10:35:50 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:30.254 10:35:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.254 10:35:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.254 10:35:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:30.254 10:35:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:30.254 10:35:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.254 10:35:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.254 10:35:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.254 ************************************ 00:04:30.254 START TEST rpc_integrity 00:04:30.254 ************************************ 00:04:30.254 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:30.254 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:30.254 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.254 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.254 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.254 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:30.254 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:30.254 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:30.254 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:30.254 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.254 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.254 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.254 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:30.254 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:30.254 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.254 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.254 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:04:30.254 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:30.254 { 00:04:30.254 "name": "Malloc0", 00:04:30.254 "aliases": [ 00:04:30.254 "899567ff-648a-4512-ada6-49a4114f0924" 00:04:30.254 ], 00:04:30.254 "product_name": "Malloc disk", 00:04:30.254 "block_size": 512, 00:04:30.254 "num_blocks": 16384, 00:04:30.254 "uuid": "899567ff-648a-4512-ada6-49a4114f0924", 00:04:30.254 "assigned_rate_limits": { 00:04:30.254 "rw_ios_per_sec": 0, 00:04:30.254 "rw_mbytes_per_sec": 0, 00:04:30.254 "r_mbytes_per_sec": 0, 00:04:30.254 "w_mbytes_per_sec": 0 00:04:30.254 }, 00:04:30.254 "claimed": false, 00:04:30.254 "zoned": false, 00:04:30.254 "supported_io_types": { 00:04:30.254 "read": true, 00:04:30.254 "write": true, 00:04:30.254 "unmap": true, 00:04:30.254 "flush": true, 00:04:30.254 "reset": true, 00:04:30.254 "nvme_admin": false, 00:04:30.254 "nvme_io": false, 00:04:30.254 "nvme_io_md": false, 00:04:30.254 "write_zeroes": true, 00:04:30.254 "zcopy": true, 00:04:30.254 "get_zone_info": false, 00:04:30.254 "zone_management": false, 00:04:30.254 "zone_append": false, 00:04:30.254 "compare": false, 00:04:30.254 "compare_and_write": false, 00:04:30.254 "abort": true, 00:04:30.254 "seek_hole": false, 00:04:30.254 "seek_data": false, 00:04:30.254 "copy": true, 00:04:30.254 "nvme_iov_md": false 00:04:30.254 }, 00:04:30.254 "memory_domains": [ 00:04:30.254 { 00:04:30.254 "dma_device_id": "system", 00:04:30.254 "dma_device_type": 1 00:04:30.254 }, 00:04:30.254 { 00:04:30.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.254 "dma_device_type": 2 00:04:30.254 } 00:04:30.254 ], 00:04:30.254 "driver_specific": {} 00:04:30.254 } 00:04:30.254 ]' 00:04:30.254 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:30.254 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:30.254 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:30.254 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.254 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.516 [2024-10-08 10:35:50.830936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:30.516 [2024-10-08 10:35:50.831001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:30.516 [2024-10-08 10:35:50.831026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:30.516 [2024-10-08 10:35:50.831037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:30.516 [2024-10-08 10:35:50.833440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:30.516 [2024-10-08 10:35:50.833485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:30.516 Passthru0 00:04:30.516 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.516 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:30.516 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.516 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.516 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.516 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:30.516 { 00:04:30.516 "name": "Malloc0", 00:04:30.516 "aliases": [ 00:04:30.516 "899567ff-648a-4512-ada6-49a4114f0924" 00:04:30.516 ], 00:04:30.516 "product_name": 
"Malloc disk", 00:04:30.516 "block_size": 512, 00:04:30.516 "num_blocks": 16384, 00:04:30.516 "uuid": "899567ff-648a-4512-ada6-49a4114f0924", 00:04:30.516 "assigned_rate_limits": { 00:04:30.516 "rw_ios_per_sec": 0, 00:04:30.516 "rw_mbytes_per_sec": 0, 00:04:30.516 "r_mbytes_per_sec": 0, 00:04:30.516 "w_mbytes_per_sec": 0 00:04:30.516 }, 00:04:30.516 "claimed": true, 00:04:30.516 "claim_type": "exclusive_write", 00:04:30.516 "zoned": false, 00:04:30.516 "supported_io_types": { 00:04:30.516 "read": true, 00:04:30.516 "write": true, 00:04:30.516 "unmap": true, 00:04:30.516 "flush": true, 00:04:30.516 "reset": true, 00:04:30.516 "nvme_admin": false, 00:04:30.516 "nvme_io": false, 00:04:30.516 "nvme_io_md": false, 00:04:30.516 "write_zeroes": true, 00:04:30.516 "zcopy": true, 00:04:30.516 "get_zone_info": false, 00:04:30.516 "zone_management": false, 00:04:30.516 "zone_append": false, 00:04:30.516 "compare": false, 00:04:30.516 "compare_and_write": false, 00:04:30.516 "abort": true, 00:04:30.516 "seek_hole": false, 00:04:30.516 "seek_data": false, 00:04:30.516 "copy": true, 00:04:30.516 "nvme_iov_md": false 00:04:30.516 }, 00:04:30.516 "memory_domains": [ 00:04:30.516 { 00:04:30.516 "dma_device_id": "system", 00:04:30.516 "dma_device_type": 1 00:04:30.516 }, 00:04:30.516 { 00:04:30.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.516 "dma_device_type": 2 00:04:30.516 } 00:04:30.516 ], 00:04:30.516 "driver_specific": {} 00:04:30.516 }, 00:04:30.516 { 00:04:30.516 "name": "Passthru0", 00:04:30.516 "aliases": [ 00:04:30.517 "c1bd966c-569f-55cb-928b-8c5b7396f349" 00:04:30.517 ], 00:04:30.517 "product_name": "passthru", 00:04:30.517 "block_size": 512, 00:04:30.517 "num_blocks": 16384, 00:04:30.517 "uuid": "c1bd966c-569f-55cb-928b-8c5b7396f349", 00:04:30.517 "assigned_rate_limits": { 00:04:30.517 "rw_ios_per_sec": 0, 00:04:30.517 "rw_mbytes_per_sec": 0, 00:04:30.517 "r_mbytes_per_sec": 0, 00:04:30.517 "w_mbytes_per_sec": 0 00:04:30.517 }, 00:04:30.517 "claimed": false, 00:04:30.517 "zoned": false, 00:04:30.517 "supported_io_types": { 00:04:30.517 "read": true, 00:04:30.517 "write": true, 00:04:30.517 "unmap": true, 00:04:30.517 "flush": true, 00:04:30.517 "reset": true, 00:04:30.517 "nvme_admin": false, 00:04:30.517 "nvme_io": false, 00:04:30.517 "nvme_io_md": false, 00:04:30.517 "write_zeroes": true, 00:04:30.517 "zcopy": true, 00:04:30.517 "get_zone_info": false, 00:04:30.517 "zone_management": false, 00:04:30.517 "zone_append": false, 00:04:30.517 "compare": false, 00:04:30.517 "compare_and_write": false, 00:04:30.517 "abort": true, 00:04:30.517 "seek_hole": false, 00:04:30.517 "seek_data": false, 00:04:30.517 "copy": true, 00:04:30.517 "nvme_iov_md": false 00:04:30.517 }, 00:04:30.517 "memory_domains": [ 00:04:30.517 { 00:04:30.517 "dma_device_id": "system", 00:04:30.517 "dma_device_type": 1 00:04:30.517 }, 00:04:30.517 { 00:04:30.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.517 "dma_device_type": 2 00:04:30.517 } 00:04:30.517 ], 00:04:30.517 "driver_specific": { 00:04:30.517 "passthru": { 00:04:30.517 "name": "Passthru0", 00:04:30.517 "base_bdev_name": "Malloc0" 00:04:30.517 } 00:04:30.517 } 00:04:30.517 } 00:04:30.517 ]' 00:04:30.517 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:30.517 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:30.517 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:30.517 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.517 10:35:50 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.517 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.517 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:30.517 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.517 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.517 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.517 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:30.517 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.517 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.517 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.517 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:30.517 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:30.517 ************************************ 00:04:30.517 END TEST rpc_integrity 00:04:30.517 ************************************ 00:04:30.517 10:35:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:30.517 00:04:30.517 real 0m0.229s 00:04:30.517 user 0m0.127s 00:04:30.517 sys 0m0.029s 00:04:30.517 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.517 10:35:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.517 10:35:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:30.517 10:35:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.517 10:35:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.517 10:35:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.517 ************************************ 00:04:30.517 START TEST rpc_plugins 00:04:30.517 ************************************ 00:04:30.517 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:30.517 10:35:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:30.517 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.517 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.517 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.517 10:35:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:30.517 10:35:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:30.517 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.517 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.517 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.517 10:35:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:30.517 { 00:04:30.517 "name": "Malloc1", 00:04:30.517 "aliases": [ 00:04:30.517 "2b2248c5-390b-4de0-ad79-68fa83d350b9" 00:04:30.517 ], 00:04:30.517 "product_name": "Malloc disk", 00:04:30.517 "block_size": 4096, 00:04:30.517 "num_blocks": 256, 00:04:30.517 "uuid": "2b2248c5-390b-4de0-ad79-68fa83d350b9", 00:04:30.517 "assigned_rate_limits": { 00:04:30.517 "rw_ios_per_sec": 0, 00:04:30.517 "rw_mbytes_per_sec": 0, 00:04:30.517 "r_mbytes_per_sec": 0, 00:04:30.517 "w_mbytes_per_sec": 0 00:04:30.517 }, 00:04:30.517 "claimed": false, 00:04:30.517 "zoned": false, 00:04:30.517 "supported_io_types": { 00:04:30.517 "read": true, 00:04:30.517 "write": true, 
00:04:30.517 "unmap": true, 00:04:30.517 "flush": true, 00:04:30.517 "reset": true, 00:04:30.517 "nvme_admin": false, 00:04:30.517 "nvme_io": false, 00:04:30.517 "nvme_io_md": false, 00:04:30.517 "write_zeroes": true, 00:04:30.517 "zcopy": true, 00:04:30.517 "get_zone_info": false, 00:04:30.517 "zone_management": false, 00:04:30.517 "zone_append": false, 00:04:30.517 "compare": false, 00:04:30.517 "compare_and_write": false, 00:04:30.517 "abort": true, 00:04:30.517 "seek_hole": false, 00:04:30.517 "seek_data": false, 00:04:30.517 "copy": true, 00:04:30.517 "nvme_iov_md": false 00:04:30.517 }, 00:04:30.517 "memory_domains": [ 00:04:30.517 { 00:04:30.517 "dma_device_id": "system", 00:04:30.517 "dma_device_type": 1 00:04:30.517 }, 00:04:30.517 { 00:04:30.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.517 "dma_device_type": 2 00:04:30.517 } 00:04:30.517 ], 00:04:30.517 "driver_specific": {} 00:04:30.517 } 00:04:30.517 ]' 00:04:30.517 10:35:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:30.517 10:35:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:30.517 10:35:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:30.517 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.517 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.517 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.517 10:35:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:30.517 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.517 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.779 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.779 10:35:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:30.779 10:35:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:30.779 ************************************ 00:04:30.779 END TEST rpc_plugins 00:04:30.779 ************************************ 00:04:30.779 10:35:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:30.779 00:04:30.779 real 0m0.106s 00:04:30.779 user 0m0.058s 00:04:30.779 sys 0m0.012s 00:04:30.779 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.779 10:35:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.779 10:35:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:30.779 10:35:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.779 10:35:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.779 10:35:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.779 ************************************ 00:04:30.779 START TEST rpc_trace_cmd_test 00:04:30.779 ************************************ 00:04:30.779 10:35:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:30.779 10:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:30.779 10:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:30.779 10:35:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.779 10:35:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:30.779 10:35:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.779 10:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:30.779 "tpoint_shm_path": 
"/dev/shm/spdk_tgt_trace.pid70763", 00:04:30.779 "tpoint_group_mask": "0x8", 00:04:30.779 "iscsi_conn": { 00:04:30.779 "mask": "0x2", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "scsi": { 00:04:30.779 "mask": "0x4", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "bdev": { 00:04:30.779 "mask": "0x8", 00:04:30.779 "tpoint_mask": "0xffffffffffffffff" 00:04:30.779 }, 00:04:30.779 "nvmf_rdma": { 00:04:30.779 "mask": "0x10", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "nvmf_tcp": { 00:04:30.779 "mask": "0x20", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "ftl": { 00:04:30.779 "mask": "0x40", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "blobfs": { 00:04:30.779 "mask": "0x80", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "dsa": { 00:04:30.779 "mask": "0x200", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "thread": { 00:04:30.779 "mask": "0x400", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "nvme_pcie": { 00:04:30.779 "mask": "0x800", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "iaa": { 00:04:30.779 "mask": "0x1000", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "nvme_tcp": { 00:04:30.779 "mask": "0x2000", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "bdev_nvme": { 00:04:30.779 "mask": "0x4000", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "sock": { 00:04:30.779 "mask": "0x8000", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "blob": { 00:04:30.779 "mask": "0x10000", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "bdev_raid": { 00:04:30.779 "mask": "0x20000", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 }, 00:04:30.779 "scheduler": { 00:04:30.779 "mask": "0x40000", 00:04:30.779 "tpoint_mask": "0x0" 00:04:30.779 } 00:04:30.779 }' 00:04:30.779 10:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:30.779 10:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:30.779 10:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:30.779 10:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:30.779 10:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:30.779 10:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:30.780 10:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:30.780 10:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:30.780 10:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:30.780 ************************************ 00:04:30.780 END TEST rpc_trace_cmd_test 00:04:30.780 ************************************ 00:04:30.780 10:35:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:30.780 00:04:30.780 real 0m0.164s 00:04:30.780 user 0m0.128s 00:04:30.780 sys 0m0.026s 00:04:30.780 10:35:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.780 10:35:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:31.042 10:35:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:31.042 10:35:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:31.042 10:35:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:31.042 10:35:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.042 10:35:51 rpc -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.042 10:35:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.042 ************************************ 00:04:31.042 START TEST rpc_daemon_integrity 00:04:31.042 ************************************ 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:31.042 { 00:04:31.042 "name": "Malloc2", 00:04:31.042 "aliases": [ 00:04:31.042 "e1a7a8e4-1416-4d07-a66b-461429ec8fb3" 00:04:31.042 ], 00:04:31.042 "product_name": "Malloc disk", 00:04:31.042 "block_size": 512, 00:04:31.042 "num_blocks": 16384, 00:04:31.042 "uuid": "e1a7a8e4-1416-4d07-a66b-461429ec8fb3", 00:04:31.042 "assigned_rate_limits": { 00:04:31.042 "rw_ios_per_sec": 0, 00:04:31.042 "rw_mbytes_per_sec": 0, 00:04:31.042 "r_mbytes_per_sec": 0, 00:04:31.042 "w_mbytes_per_sec": 0 00:04:31.042 }, 00:04:31.042 "claimed": false, 00:04:31.042 "zoned": false, 00:04:31.042 "supported_io_types": { 00:04:31.042 "read": true, 00:04:31.042 "write": true, 00:04:31.042 "unmap": true, 00:04:31.042 "flush": true, 00:04:31.042 "reset": true, 00:04:31.042 "nvme_admin": false, 00:04:31.042 "nvme_io": false, 00:04:31.042 "nvme_io_md": false, 00:04:31.042 "write_zeroes": true, 00:04:31.042 "zcopy": true, 00:04:31.042 "get_zone_info": false, 00:04:31.042 "zone_management": false, 00:04:31.042 "zone_append": false, 00:04:31.042 "compare": false, 00:04:31.042 "compare_and_write": false, 00:04:31.042 "abort": true, 00:04:31.042 "seek_hole": false, 00:04:31.042 "seek_data": false, 00:04:31.042 "copy": true, 00:04:31.042 "nvme_iov_md": false 00:04:31.042 }, 00:04:31.042 "memory_domains": [ 00:04:31.042 { 00:04:31.042 "dma_device_id": "system", 00:04:31.042 "dma_device_type": 1 00:04:31.042 }, 00:04:31.042 { 00:04:31.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.042 "dma_device_type": 2 00:04:31.042 } 00:04:31.042 ], 00:04:31.042 "driver_specific": {} 00:04:31.042 } 00:04:31.042 ]' 00:04:31.042 10:35:51 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.042 [2024-10-08 10:35:51.520161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:31.042 [2024-10-08 10:35:51.520364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:31.042 [2024-10-08 10:35:51.520394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:31.042 [2024-10-08 10:35:51.520407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:31.042 [2024-10-08 10:35:51.522911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:31.042 [2024-10-08 10:35:51.522958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:31.042 Passthru0 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.042 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:31.042 { 00:04:31.042 "name": "Malloc2", 00:04:31.042 "aliases": [ 00:04:31.042 "e1a7a8e4-1416-4d07-a66b-461429ec8fb3" 00:04:31.042 ], 00:04:31.042 "product_name": "Malloc disk", 00:04:31.042 "block_size": 512, 00:04:31.042 "num_blocks": 16384, 00:04:31.042 "uuid": "e1a7a8e4-1416-4d07-a66b-461429ec8fb3", 00:04:31.042 "assigned_rate_limits": { 00:04:31.042 "rw_ios_per_sec": 0, 00:04:31.042 "rw_mbytes_per_sec": 0, 00:04:31.042 "r_mbytes_per_sec": 0, 00:04:31.042 "w_mbytes_per_sec": 0 00:04:31.042 }, 00:04:31.042 "claimed": true, 00:04:31.042 "claim_type": "exclusive_write", 00:04:31.042 "zoned": false, 00:04:31.042 "supported_io_types": { 00:04:31.042 "read": true, 00:04:31.042 "write": true, 00:04:31.042 "unmap": true, 00:04:31.042 "flush": true, 00:04:31.042 "reset": true, 00:04:31.042 "nvme_admin": false, 00:04:31.042 "nvme_io": false, 00:04:31.042 "nvme_io_md": false, 00:04:31.042 "write_zeroes": true, 00:04:31.042 "zcopy": true, 00:04:31.042 "get_zone_info": false, 00:04:31.042 "zone_management": false, 00:04:31.042 "zone_append": false, 00:04:31.042 "compare": false, 00:04:31.042 "compare_and_write": false, 00:04:31.042 "abort": true, 00:04:31.042 "seek_hole": false, 00:04:31.042 "seek_data": false, 00:04:31.042 "copy": true, 00:04:31.042 "nvme_iov_md": false 00:04:31.042 }, 00:04:31.042 "memory_domains": [ 00:04:31.042 { 00:04:31.042 "dma_device_id": "system", 00:04:31.042 "dma_device_type": 1 00:04:31.042 }, 00:04:31.043 { 00:04:31.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.043 "dma_device_type": 2 00:04:31.043 } 00:04:31.043 ], 00:04:31.043 "driver_specific": {} 00:04:31.043 }, 00:04:31.043 { 00:04:31.043 "name": "Passthru0", 00:04:31.043 "aliases": [ 00:04:31.043 "0a38b8de-61f0-586a-9dfa-be1164ac5dea" 00:04:31.043 ], 00:04:31.043 
"product_name": "passthru", 00:04:31.043 "block_size": 512, 00:04:31.043 "num_blocks": 16384, 00:04:31.043 "uuid": "0a38b8de-61f0-586a-9dfa-be1164ac5dea", 00:04:31.043 "assigned_rate_limits": { 00:04:31.043 "rw_ios_per_sec": 0, 00:04:31.043 "rw_mbytes_per_sec": 0, 00:04:31.043 "r_mbytes_per_sec": 0, 00:04:31.043 "w_mbytes_per_sec": 0 00:04:31.043 }, 00:04:31.043 "claimed": false, 00:04:31.043 "zoned": false, 00:04:31.043 "supported_io_types": { 00:04:31.043 "read": true, 00:04:31.043 "write": true, 00:04:31.043 "unmap": true, 00:04:31.043 "flush": true, 00:04:31.043 "reset": true, 00:04:31.043 "nvme_admin": false, 00:04:31.043 "nvme_io": false, 00:04:31.043 "nvme_io_md": false, 00:04:31.043 "write_zeroes": true, 00:04:31.043 "zcopy": true, 00:04:31.043 "get_zone_info": false, 00:04:31.043 "zone_management": false, 00:04:31.043 "zone_append": false, 00:04:31.043 "compare": false, 00:04:31.043 "compare_and_write": false, 00:04:31.043 "abort": true, 00:04:31.043 "seek_hole": false, 00:04:31.043 "seek_data": false, 00:04:31.043 "copy": true, 00:04:31.043 "nvme_iov_md": false 00:04:31.043 }, 00:04:31.043 "memory_domains": [ 00:04:31.043 { 00:04:31.043 "dma_device_id": "system", 00:04:31.043 "dma_device_type": 1 00:04:31.043 }, 00:04:31.043 { 00:04:31.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.043 "dma_device_type": 2 00:04:31.043 } 00:04:31.043 ], 00:04:31.043 "driver_specific": { 00:04:31.043 "passthru": { 00:04:31.043 "name": "Passthru0", 00:04:31.043 "base_bdev_name": "Malloc2" 00:04:31.043 } 00:04:31.043 } 00:04:31.043 } 00:04:31.043 ]' 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:31.043 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:31.304 ************************************ 00:04:31.304 END TEST rpc_daemon_integrity 00:04:31.304 ************************************ 00:04:31.304 10:35:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:31.304 00:04:31.304 real 0m0.235s 00:04:31.304 user 0m0.133s 00:04:31.304 sys 0m0.035s 00:04:31.304 10:35:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.304 10:35:51 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:31.304 10:35:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:31.304 10:35:51 rpc -- rpc/rpc.sh@84 -- # killprocess 70763 00:04:31.304 10:35:51 rpc -- common/autotest_common.sh@950 -- # '[' -z 70763 ']' 00:04:31.304 10:35:51 rpc -- common/autotest_common.sh@954 -- # kill -0 70763 00:04:31.304 10:35:51 rpc -- common/autotest_common.sh@955 -- # uname 00:04:31.304 10:35:51 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:31.304 10:35:51 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70763 00:04:31.304 10:35:51 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:31.304 killing process with pid 70763 00:04:31.304 10:35:51 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:31.304 10:35:51 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70763' 00:04:31.304 10:35:51 rpc -- common/autotest_common.sh@969 -- # kill 70763 00:04:31.304 10:35:51 rpc -- common/autotest_common.sh@974 -- # wait 70763 00:04:31.566 ************************************ 00:04:31.566 END TEST rpc 00:04:31.566 ************************************ 00:04:31.566 00:04:31.566 real 0m2.395s 00:04:31.566 user 0m2.728s 00:04:31.566 sys 0m0.693s 00:04:31.566 10:35:52 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.567 10:35:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.567 10:35:52 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:31.567 10:35:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.567 10:35:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.567 10:35:52 -- common/autotest_common.sh@10 -- # set +x 00:04:31.567 ************************************ 00:04:31.567 START TEST skip_rpc 00:04:31.567 ************************************ 00:04:31.567 10:35:52 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:31.828 * Looking for test storage... 
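
Note: the rpc suite that just finished (rpc_integrity, rpc_plugins, rpc_trace_cmd_test, rpc_daemon_integrity) drives spdk_tgt's JSON-RPC server through bdev create/inspect/delete cycles before killing the target. A minimal sketch of the integrity flow it automates, assuming a running spdk_tgt on the default /var/tmp/spdk.sock and the stock rpc.py client invoked from the repo root (the same RPC methods appear verbatim in the log above):

  scripts/rpc.py bdev_malloc_create 8 512                    # 8 MiB malloc bdev, 512 B blocks -> 16384 blocks; prints e.g. Malloc0
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # claim the base bdev behind a passthru vbdev
  scripts/rpc.py bdev_get_bdevs | jq length                  # expect 2: base bdev + passthru
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length                  # expect 0 once both are torn down
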
00:04:31.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:31.828 10:35:52 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:31.828 10:35:52 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:31.828 10:35:52 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:31.828 10:35:52 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.828 10:35:52 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:31.828 10:35:52 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.828 10:35:52 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:31.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.828 --rc genhtml_branch_coverage=1 00:04:31.828 --rc genhtml_function_coverage=1 00:04:31.828 --rc genhtml_legend=1 00:04:31.828 --rc geninfo_all_blocks=1 00:04:31.828 --rc geninfo_unexecuted_blocks=1 00:04:31.828 00:04:31.828 ' 00:04:31.828 10:35:52 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:31.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.828 --rc genhtml_branch_coverage=1 00:04:31.828 --rc genhtml_function_coverage=1 00:04:31.828 --rc genhtml_legend=1 00:04:31.828 --rc geninfo_all_blocks=1 00:04:31.828 --rc geninfo_unexecuted_blocks=1 00:04:31.828 00:04:31.828 ' 00:04:31.828 10:35:52 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:04:31.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.828 --rc genhtml_branch_coverage=1 00:04:31.828 --rc genhtml_function_coverage=1 00:04:31.828 --rc genhtml_legend=1 00:04:31.828 --rc geninfo_all_blocks=1 00:04:31.828 --rc geninfo_unexecuted_blocks=1 00:04:31.828 00:04:31.828 ' 00:04:31.828 10:35:52 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:31.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.828 --rc genhtml_branch_coverage=1 00:04:31.828 --rc genhtml_function_coverage=1 00:04:31.828 --rc genhtml_legend=1 00:04:31.828 --rc geninfo_all_blocks=1 00:04:31.828 --rc geninfo_unexecuted_blocks=1 00:04:31.828 00:04:31.828 ' 00:04:31.828 10:35:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:31.828 10:35:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:31.828 10:35:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:31.828 10:35:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.828 10:35:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.828 10:35:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.828 ************************************ 00:04:31.828 START TEST skip_rpc 00:04:31.828 ************************************ 00:04:31.828 10:35:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:31.828 10:35:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70964 00:04:31.828 10:35:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.828 10:35:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:31.828 10:35:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:31.828 [2024-10-08 10:35:52.354197] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:04:31.828 [2024-10-08 10:35:52.354516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70964 ] 00:04:32.090 [2024-10-08 10:35:52.487126] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
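
Note: the skip_rpc case starting here launches spdk_tgt with --no-rpc-server and asserts that an RPC call fails cleanly instead of hanging. A sketch of the same check, assuming the build-tree layout used in this run (the test sleeps a fixed 5 s because there is no RPC socket to wait on):

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  ! scripts/rpc.py spdk_get_version    # must fail: no RPC server was started
  kill %1
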
00:04:32.090 [2024-10-08 10:35:52.508451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.090 [2024-10-08 10:35:52.558014] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70964 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70964 ']' 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70964 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70964 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70964' 00:04:37.382 killing process with pid 70964 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70964 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70964 00:04:37.382 00:04:37.382 real 0m5.262s 00:04:37.382 user 0m4.854s 00:04:37.382 sys 0m0.307s 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.382 10:35:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.382 ************************************ 00:04:37.382 END TEST skip_rpc 00:04:37.382 ************************************ 00:04:37.382 10:35:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:37.382 10:35:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.382 10:35:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.382 10:35:57 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.382 ************************************ 00:04:37.382 START TEST skip_rpc_with_json 00:04:37.382 ************************************ 00:04:37.382 10:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:37.382 10:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:37.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.382 10:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71046 00:04:37.382 10:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.382 10:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71046 00:04:37.383 10:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 71046 ']' 00:04:37.383 10:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.383 10:35:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.383 10:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:37.383 10:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.383 10:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:37.383 10:35:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.383 [2024-10-08 10:35:57.661017] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:04:37.383 [2024-10-08 10:35:57.661129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71046 ] 00:04:37.383 [2024-10-08 10:35:57.789610] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
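
Note: skip_rpc_with_json brings up a full target, provokes a JSON-RPC error, creates a TCP transport, and snapshots the live configuration. The sequence it runs, as a standalone sketch against the default socket (all three methods appear in the log below):

  scripts/rpc.py nvmf_get_transports --trtype tcp      # JSON-RPC error -19: transport 'tcp' does not exist yet
  scripts/rpc.py nvmf_create_transport -t tcp          # logs "*** TCP Transport Init ***"
  scripts/rpc.py save_config > test/rpc/config.json    # dump every subsystem's current config as JSON

The target is then restarted with --json pointing at the saved file, and its log is grepped for the transport-init notice to prove the snapshot was replayed.
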
00:04:37.383 [2024-10-08 10:35:57.811870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.383 [2024-10-08 10:35:57.850541] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.954 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.954 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:37.954 10:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:37.954 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.954 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.954 [2024-10-08 10:35:58.502137] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:37.954 request: 00:04:37.954 { 00:04:37.954 "trtype": "tcp", 00:04:37.954 "method": "nvmf_get_transports", 00:04:37.954 "req_id": 1 00:04:37.954 } 00:04:37.954 Got JSON-RPC error response 00:04:37.954 response: 00:04:37.954 { 00:04:37.954 "code": -19, 00:04:37.954 "message": "No such device" 00:04:37.954 } 00:04:37.954 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:37.954 10:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:37.954 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.954 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.954 [2024-10-08 10:35:58.514258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:37.954 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.954 10:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:37.954 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.954 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.215 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.215 10:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:38.215 { 00:04:38.215 "subsystems": [ 00:04:38.215 { 00:04:38.215 "subsystem": "fsdev", 00:04:38.215 "config": [ 00:04:38.215 { 00:04:38.215 "method": "fsdev_set_opts", 00:04:38.215 "params": { 00:04:38.215 "fsdev_io_pool_size": 65535, 00:04:38.215 "fsdev_io_cache_size": 256 00:04:38.215 } 00:04:38.215 } 00:04:38.215 ] 00:04:38.215 }, 00:04:38.215 { 00:04:38.215 "subsystem": "keyring", 00:04:38.215 "config": [] 00:04:38.215 }, 00:04:38.215 { 00:04:38.215 "subsystem": "iobuf", 00:04:38.215 "config": [ 00:04:38.215 { 00:04:38.215 "method": "iobuf_set_options", 00:04:38.215 "params": { 00:04:38.215 "small_pool_count": 8192, 00:04:38.215 "large_pool_count": 1024, 00:04:38.215 "small_bufsize": 8192, 00:04:38.215 "large_bufsize": 135168 00:04:38.215 } 00:04:38.215 } 00:04:38.215 ] 00:04:38.215 }, 00:04:38.215 { 00:04:38.215 "subsystem": "sock", 00:04:38.215 "config": [ 00:04:38.216 { 00:04:38.216 "method": "sock_set_default_impl", 00:04:38.216 "params": { 00:04:38.216 "impl_name": "posix" 00:04:38.216 } 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "method": "sock_impl_set_options", 00:04:38.216 "params": { 00:04:38.216 "impl_name": "ssl", 00:04:38.216 "recv_buf_size": 4096, 00:04:38.216 "send_buf_size": 4096, 00:04:38.216 
"enable_recv_pipe": true, 00:04:38.216 "enable_quickack": false, 00:04:38.216 "enable_placement_id": 0, 00:04:38.216 "enable_zerocopy_send_server": true, 00:04:38.216 "enable_zerocopy_send_client": false, 00:04:38.216 "zerocopy_threshold": 0, 00:04:38.216 "tls_version": 0, 00:04:38.216 "enable_ktls": false 00:04:38.216 } 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "method": "sock_impl_set_options", 00:04:38.216 "params": { 00:04:38.216 "impl_name": "posix", 00:04:38.216 "recv_buf_size": 2097152, 00:04:38.216 "send_buf_size": 2097152, 00:04:38.216 "enable_recv_pipe": true, 00:04:38.216 "enable_quickack": false, 00:04:38.216 "enable_placement_id": 0, 00:04:38.216 "enable_zerocopy_send_server": true, 00:04:38.216 "enable_zerocopy_send_client": false, 00:04:38.216 "zerocopy_threshold": 0, 00:04:38.216 "tls_version": 0, 00:04:38.216 "enable_ktls": false 00:04:38.216 } 00:04:38.216 } 00:04:38.216 ] 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "subsystem": "vmd", 00:04:38.216 "config": [] 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "subsystem": "accel", 00:04:38.216 "config": [ 00:04:38.216 { 00:04:38.216 "method": "accel_set_options", 00:04:38.216 "params": { 00:04:38.216 "small_cache_size": 128, 00:04:38.216 "large_cache_size": 16, 00:04:38.216 "task_count": 2048, 00:04:38.216 "sequence_count": 2048, 00:04:38.216 "buf_count": 2048 00:04:38.216 } 00:04:38.216 } 00:04:38.216 ] 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "subsystem": "bdev", 00:04:38.216 "config": [ 00:04:38.216 { 00:04:38.216 "method": "bdev_set_options", 00:04:38.216 "params": { 00:04:38.216 "bdev_io_pool_size": 65535, 00:04:38.216 "bdev_io_cache_size": 256, 00:04:38.216 "bdev_auto_examine": true, 00:04:38.216 "iobuf_small_cache_size": 128, 00:04:38.216 "iobuf_large_cache_size": 16 00:04:38.216 } 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "method": "bdev_raid_set_options", 00:04:38.216 "params": { 00:04:38.216 "process_window_size_kb": 1024, 00:04:38.216 "process_max_bandwidth_mb_sec": 0 00:04:38.216 } 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "method": "bdev_iscsi_set_options", 00:04:38.216 "params": { 00:04:38.216 "timeout_sec": 30 00:04:38.216 } 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "method": "bdev_nvme_set_options", 00:04:38.216 "params": { 00:04:38.216 "action_on_timeout": "none", 00:04:38.216 "timeout_us": 0, 00:04:38.216 "timeout_admin_us": 0, 00:04:38.216 "keep_alive_timeout_ms": 10000, 00:04:38.216 "arbitration_burst": 0, 00:04:38.216 "low_priority_weight": 0, 00:04:38.216 "medium_priority_weight": 0, 00:04:38.216 "high_priority_weight": 0, 00:04:38.216 "nvme_adminq_poll_period_us": 10000, 00:04:38.216 "nvme_ioq_poll_period_us": 0, 00:04:38.216 "io_queue_requests": 0, 00:04:38.216 "delay_cmd_submit": true, 00:04:38.216 "transport_retry_count": 4, 00:04:38.216 "bdev_retry_count": 3, 00:04:38.216 "transport_ack_timeout": 0, 00:04:38.216 "ctrlr_loss_timeout_sec": 0, 00:04:38.216 "reconnect_delay_sec": 0, 00:04:38.216 "fast_io_fail_timeout_sec": 0, 00:04:38.216 "disable_auto_failback": false, 00:04:38.216 "generate_uuids": false, 00:04:38.216 "transport_tos": 0, 00:04:38.216 "nvme_error_stat": false, 00:04:38.216 "rdma_srq_size": 0, 00:04:38.216 "io_path_stat": false, 00:04:38.216 "allow_accel_sequence": false, 00:04:38.216 "rdma_max_cq_size": 0, 00:04:38.216 "rdma_cm_event_timeout_ms": 0, 00:04:38.216 "dhchap_digests": [ 00:04:38.216 "sha256", 00:04:38.216 "sha384", 00:04:38.216 "sha512" 00:04:38.216 ], 00:04:38.216 "dhchap_dhgroups": [ 00:04:38.216 "null", 00:04:38.216 "ffdhe2048", 00:04:38.216 "ffdhe3072", 
00:04:38.216 "ffdhe4096", 00:04:38.216 "ffdhe6144", 00:04:38.216 "ffdhe8192" 00:04:38.216 ] 00:04:38.216 } 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "method": "bdev_nvme_set_hotplug", 00:04:38.216 "params": { 00:04:38.216 "period_us": 100000, 00:04:38.216 "enable": false 00:04:38.216 } 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "method": "bdev_wait_for_examine" 00:04:38.216 } 00:04:38.216 ] 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "subsystem": "scsi", 00:04:38.216 "config": null 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "subsystem": "scheduler", 00:04:38.216 "config": [ 00:04:38.216 { 00:04:38.216 "method": "framework_set_scheduler", 00:04:38.216 "params": { 00:04:38.216 "name": "static" 00:04:38.216 } 00:04:38.216 } 00:04:38.216 ] 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "subsystem": "vhost_scsi", 00:04:38.216 "config": [] 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "subsystem": "vhost_blk", 00:04:38.216 "config": [] 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "subsystem": "ublk", 00:04:38.216 "config": [] 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "subsystem": "nbd", 00:04:38.216 "config": [] 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "subsystem": "nvmf", 00:04:38.216 "config": [ 00:04:38.216 { 00:04:38.216 "method": "nvmf_set_config", 00:04:38.216 "params": { 00:04:38.216 "discovery_filter": "match_any", 00:04:38.216 "admin_cmd_passthru": { 00:04:38.216 "identify_ctrlr": false 00:04:38.216 }, 00:04:38.216 "dhchap_digests": [ 00:04:38.216 "sha256", 00:04:38.216 "sha384", 00:04:38.216 "sha512" 00:04:38.216 ], 00:04:38.216 "dhchap_dhgroups": [ 00:04:38.216 "null", 00:04:38.216 "ffdhe2048", 00:04:38.216 "ffdhe3072", 00:04:38.216 "ffdhe4096", 00:04:38.216 "ffdhe6144", 00:04:38.216 "ffdhe8192" 00:04:38.216 ] 00:04:38.216 } 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "method": "nvmf_set_max_subsystems", 00:04:38.216 "params": { 00:04:38.216 "max_subsystems": 1024 00:04:38.216 } 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "method": "nvmf_set_crdt", 00:04:38.216 "params": { 00:04:38.216 "crdt1": 0, 00:04:38.216 "crdt2": 0, 00:04:38.216 "crdt3": 0 00:04:38.216 } 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "method": "nvmf_create_transport", 00:04:38.216 "params": { 00:04:38.216 "trtype": "TCP", 00:04:38.216 "max_queue_depth": 128, 00:04:38.216 "max_io_qpairs_per_ctrlr": 127, 00:04:38.216 "in_capsule_data_size": 4096, 00:04:38.216 "max_io_size": 131072, 00:04:38.216 "io_unit_size": 131072, 00:04:38.216 "max_aq_depth": 128, 00:04:38.216 "num_shared_buffers": 511, 00:04:38.216 "buf_cache_size": 4294967295, 00:04:38.216 "dif_insert_or_strip": false, 00:04:38.216 "zcopy": false, 00:04:38.216 "c2h_success": true, 00:04:38.216 "sock_priority": 0, 00:04:38.216 "abort_timeout_sec": 1, 00:04:38.216 "ack_timeout": 0, 00:04:38.216 "data_wr_pool_size": 0 00:04:38.216 } 00:04:38.216 } 00:04:38.216 ] 00:04:38.216 }, 00:04:38.216 { 00:04:38.216 "subsystem": "iscsi", 00:04:38.216 "config": [ 00:04:38.216 { 00:04:38.216 "method": "iscsi_set_options", 00:04:38.216 "params": { 00:04:38.216 "node_base": "iqn.2016-06.io.spdk", 00:04:38.216 "max_sessions": 128, 00:04:38.216 "max_connections_per_session": 2, 00:04:38.216 "max_queue_depth": 64, 00:04:38.216 "default_time2wait": 2, 00:04:38.216 "default_time2retain": 20, 00:04:38.216 "first_burst_length": 8192, 00:04:38.216 "immediate_data": true, 00:04:38.216 "allow_duplicated_isid": false, 00:04:38.216 "error_recovery_level": 0, 00:04:38.216 "nop_timeout": 60, 00:04:38.216 "nop_in_interval": 30, 00:04:38.216 "disable_chap": false, 00:04:38.216 
"require_chap": false, 00:04:38.216 "mutual_chap": false, 00:04:38.216 "chap_group": 0, 00:04:38.216 "max_large_datain_per_connection": 64, 00:04:38.216 "max_r2t_per_connection": 4, 00:04:38.216 "pdu_pool_size": 36864, 00:04:38.216 "immediate_data_pool_size": 16384, 00:04:38.216 "data_out_pool_size": 2048 00:04:38.216 } 00:04:38.216 } 00:04:38.216 ] 00:04:38.216 } 00:04:38.216 ] 00:04:38.216 } 00:04:38.216 10:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:38.216 10:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71046 00:04:38.216 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 71046 ']' 00:04:38.216 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 71046 00:04:38.216 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:38.216 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:38.216 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71046 00:04:38.216 killing process with pid 71046 00:04:38.216 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:38.216 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:38.216 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71046' 00:04:38.216 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 71046 00:04:38.216 10:35:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 71046 00:04:38.478 10:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71074 00:04:38.478 10:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:38.478 10:35:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:43.760 10:36:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71074 00:04:43.760 10:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 71074 ']' 00:04:43.760 10:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 71074 00:04:43.760 10:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:43.760 10:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.760 10:36:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71074 00:04:43.760 killing process with pid 71074 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71074' 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 71074 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 71074 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:43.760 00:04:43.760 real 0m6.658s 00:04:43.760 user 0m6.241s 00:04:43.760 sys 0m0.639s 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.760 ************************************ 00:04:43.760 END TEST skip_rpc_with_json 00:04:43.760 ************************************ 00:04:43.760 10:36:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:43.760 10:36:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.760 10:36:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.760 10:36:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.760 ************************************ 00:04:43.760 START TEST skip_rpc_with_delay 00:04:43.760 ************************************ 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:43.760 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:44.021 [2024-10-08 10:36:04.381607] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
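
Note: skip_rpc_with_delay exercises an argument-validation path: --wait-for-rpc defers framework init until an RPC arrives, which is contradictory when --no-rpc-server disables the RPC server, so spdk_tgt must refuse to start. A sketch of the expected failure:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # expected: "Cannot use '--wait-for-rpc' if no RPC server is going to be started." and a non-zero exit
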
00:04:44.021 [2024-10-08 10:36:04.381741] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:44.021 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:44.021 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:44.021 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:44.021 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:44.021 00:04:44.021 real 0m0.119s 00:04:44.021 user 0m0.063s 00:04:44.021 sys 0m0.054s 00:04:44.021 ************************************ 00:04:44.021 END TEST skip_rpc_with_delay 00:04:44.021 ************************************ 00:04:44.021 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.021 10:36:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:44.021 10:36:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:44.021 10:36:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:44.021 10:36:04 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:44.021 10:36:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.021 10:36:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.021 10:36:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.021 ************************************ 00:04:44.021 START TEST exit_on_failed_rpc_init 00:04:44.021 ************************************ 00:04:44.021 10:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:44.022 10:36:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71181 00:04:44.022 10:36:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71181 00:04:44.022 10:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 71181 ']' 00:04:44.022 10:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.022 10:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.022 10:36:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.022 10:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.022 10:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.022 10:36:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.022 [2024-10-08 10:36:04.573175] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:04:44.022 [2024-10-08 10:36:04.573307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71181 ] 00:04:44.283 [2024-10-08 10:36:04.706071] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:04:44.283 [2024-10-08 10:36:04.722981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.283 [2024-10-08 10:36:04.773064] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:44.853 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:45.113 [2024-10-08 10:36:05.501614] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:04:45.113 [2024-10-08 10:36:05.501767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71199 ] 00:04:45.113 [2024-10-08 10:36:05.633446] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:45.113 [2024-10-08 10:36:05.656213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.374 [2024-10-08 10:36:05.705851] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.374 [2024-10-08 10:36:05.705953] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
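
Note: exit_on_failed_rpc_init starts a second spdk_tgt (-m 0x2) while the first (-m 0x1) still owns /var/tmp/spdk.sock, and expects rpc.c to report the socket in use and the second app to stop with a non-zero code. Sketch; the -r/--rpc-socket line at the end is an assumed workaround based on the standard SPDK app option set, not something this test exercises:

  build/bin/spdk_tgt -m 0x1 &                        # first instance claims /var/tmp/spdk.sock
  build/bin/spdk_tgt -m 0x2                          # fails: "RPC Unix domain socket path /var/tmp/spdk.sock in use."
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock   # assumed: give the second instance its own RPC socket
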
00:04:45.374 [2024-10-08 10:36:05.705970] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:45.374 [2024-10-08 10:36:05.705982] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71181 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 71181 ']' 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 71181 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71181 00:04:45.374 killing process with pid 71181 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71181' 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 71181 00:04:45.374 10:36:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 71181 00:04:45.635 ************************************ 00:04:45.635 END TEST exit_on_failed_rpc_init 00:04:45.635 ************************************ 00:04:45.635 00:04:45.635 real 0m1.709s 00:04:45.635 user 0m1.803s 00:04:45.635 sys 0m0.514s 00:04:45.635 10:36:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.635 10:36:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:45.914 10:36:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:45.914 ************************************ 00:04:45.914 END TEST skip_rpc 00:04:45.914 ************************************ 00:04:45.914 00:04:45.914 real 0m14.153s 00:04:45.914 user 0m13.115s 00:04:45.914 sys 0m1.699s 00:04:45.914 10:36:06 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.914 10:36:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.914 10:36:06 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:45.914 10:36:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.914 10:36:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.914 10:36:06 -- common/autotest_common.sh@10 -- # set +x 00:04:45.914 
************************************ 00:04:45.914 START TEST rpc_client 00:04:45.914 ************************************ 00:04:45.914 10:36:06 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:45.914 * Looking for test storage... 00:04:45.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:45.914 10:36:06 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:45.914 10:36:06 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:45.914 10:36:06 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:45.914 10:36:06 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.914 10:36:06 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:45.914 10:36:06 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.914 10:36:06 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:45.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.914 --rc genhtml_branch_coverage=1 00:04:45.914 --rc genhtml_function_coverage=1 00:04:45.914 --rc genhtml_legend=1 00:04:45.914 --rc geninfo_all_blocks=1 00:04:45.914 --rc geninfo_unexecuted_blocks=1 00:04:45.914 00:04:45.914 ' 00:04:45.914 10:36:06 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:45.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.914 --rc genhtml_branch_coverage=1 00:04:45.914 --rc genhtml_function_coverage=1 00:04:45.914 --rc genhtml_legend=1 00:04:45.914 --rc geninfo_all_blocks=1 00:04:45.914 --rc geninfo_unexecuted_blocks=1 00:04:45.914 00:04:45.914 ' 00:04:45.914 10:36:06 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:45.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.914 --rc genhtml_branch_coverage=1 00:04:45.914 --rc genhtml_function_coverage=1 00:04:45.914 --rc genhtml_legend=1 00:04:45.914 --rc geninfo_all_blocks=1 00:04:45.914 --rc geninfo_unexecuted_blocks=1 00:04:45.914 00:04:45.914 ' 00:04:46.176 10:36:06 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:46.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.176 --rc genhtml_branch_coverage=1 00:04:46.176 --rc genhtml_function_coverage=1 00:04:46.176 --rc genhtml_legend=1 00:04:46.176 --rc geninfo_all_blocks=1 00:04:46.176 --rc geninfo_unexecuted_blocks=1 00:04:46.176 00:04:46.176 ' 00:04:46.176 10:36:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:46.176 OK 00:04:46.176 10:36:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:46.176 00:04:46.176 real 0m0.203s 00:04:46.176 user 0m0.119s 00:04:46.176 sys 0m0.087s 00:04:46.176 10:36:06 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.176 10:36:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:46.176 ************************************ 00:04:46.176 END TEST rpc_client 00:04:46.176 ************************************ 00:04:46.176 10:36:06 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:46.176 10:36:06 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.176 10:36:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.176 10:36:06 -- common/autotest_common.sh@10 -- # set +x 00:04:46.177 ************************************ 00:04:46.177 START TEST json_config 00:04:46.177 ************************************ 00:04:46.177 10:36:06 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:46.177 10:36:06 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:46.177 10:36:06 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:46.177 10:36:06 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:46.177 10:36:06 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:46.177 10:36:06 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.177 10:36:06 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.177 10:36:06 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.177 10:36:06 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.177 10:36:06 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.177 10:36:06 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.177 10:36:06 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.177 10:36:06 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.177 10:36:06 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.177 10:36:06 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.177 10:36:06 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.177 10:36:06 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:46.177 10:36:06 json_config -- scripts/common.sh@345 -- # : 1 00:04:46.177 10:36:06 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.177 10:36:06 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.177 10:36:06 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:46.177 10:36:06 json_config -- scripts/common.sh@353 -- # local d=1 00:04:46.177 10:36:06 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.177 10:36:06 json_config -- scripts/common.sh@355 -- # echo 1 00:04:46.177 10:36:06 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.177 10:36:06 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:46.177 10:36:06 json_config -- scripts/common.sh@353 -- # local d=2 00:04:46.177 10:36:06 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.177 10:36:06 json_config -- scripts/common.sh@355 -- # echo 2 00:04:46.177 10:36:06 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.177 10:36:06 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.177 10:36:06 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.177 10:36:06 json_config -- scripts/common.sh@368 -- # return 0 00:04:46.177 10:36:06 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.177 10:36:06 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:46.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.177 --rc genhtml_branch_coverage=1 00:04:46.177 --rc genhtml_function_coverage=1 00:04:46.177 --rc genhtml_legend=1 00:04:46.177 --rc geninfo_all_blocks=1 00:04:46.177 --rc geninfo_unexecuted_blocks=1 00:04:46.177 00:04:46.177 ' 00:04:46.177 10:36:06 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:46.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.177 --rc genhtml_branch_coverage=1 00:04:46.177 --rc genhtml_function_coverage=1 00:04:46.177 --rc genhtml_legend=1 00:04:46.177 --rc geninfo_all_blocks=1 00:04:46.177 --rc geninfo_unexecuted_blocks=1 00:04:46.177 00:04:46.177 ' 00:04:46.177 10:36:06 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:46.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.177 --rc genhtml_branch_coverage=1 00:04:46.177 --rc genhtml_function_coverage=1 00:04:46.177 --rc genhtml_legend=1 00:04:46.177 --rc geninfo_all_blocks=1 00:04:46.177 --rc geninfo_unexecuted_blocks=1 00:04:46.177 00:04:46.177 ' 00:04:46.177 10:36:06 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:46.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.177 --rc genhtml_branch_coverage=1 00:04:46.177 --rc genhtml_function_coverage=1 00:04:46.177 --rc genhtml_legend=1 00:04:46.177 --rc geninfo_all_blocks=1 00:04:46.177 --rc geninfo_unexecuted_blocks=1 00:04:46.177 00:04:46.177 ' 00:04:46.177 10:36:06 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.177 10:36:06 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85329bc7-ddd3-4e8b-9a4d-f786d01c4aeb 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=85329bc7-ddd3-4e8b-9a4d-f786d01c4aeb 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:46.177 10:36:06 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:46.177 10:36:06 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.177 10:36:06 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.177 10:36:06 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.177 10:36:06 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.177 10:36:06 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.177 10:36:06 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.177 10:36:06 json_config -- paths/export.sh@5 -- # export PATH 00:04:46.177 10:36:06 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@51 -- # : 0 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:46.177 10:36:06 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:46.177 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:46.177 10:36:06 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:46.177 10:36:06 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:46.177 WARNING: No tests are enabled so not running JSON configuration tests 00:04:46.177 10:36:06 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:46.177 10:36:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:46.177 10:36:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:46.177 10:36:06 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:46.177 10:36:06 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:46.177 10:36:06 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:46.177 00:04:46.177 real 0m0.129s 00:04:46.177 user 0m0.076s 00:04:46.177 sys 0m0.054s 00:04:46.177 ************************************ 00:04:46.177 END TEST json_config 00:04:46.177 ************************************ 00:04:46.177 10:36:06 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.177 10:36:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.177 10:36:06 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:46.177 10:36:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.177 10:36:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.177 10:36:06 -- common/autotest_common.sh@10 -- # set +x 00:04:46.438 ************************************ 00:04:46.438 START TEST json_config_extra_key 00:04:46.438 ************************************ 00:04:46.438 10:36:06 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:46.438 10:36:06 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:46.438 10:36:06 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:46.438 10:36:06 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:46.438 10:36:06 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.438 10:36:06 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:46.438 10:36:06 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.438 10:36:06 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:46.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.438 --rc genhtml_branch_coverage=1 00:04:46.438 --rc genhtml_function_coverage=1 00:04:46.438 --rc genhtml_legend=1 00:04:46.438 --rc geninfo_all_blocks=1 00:04:46.438 --rc geninfo_unexecuted_blocks=1 00:04:46.438 00:04:46.438 ' 00:04:46.438 10:36:06 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:46.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.438 --rc genhtml_branch_coverage=1 00:04:46.438 --rc genhtml_function_coverage=1 00:04:46.438 --rc genhtml_legend=1 00:04:46.438 --rc geninfo_all_blocks=1 00:04:46.438 --rc geninfo_unexecuted_blocks=1 00:04:46.438 00:04:46.438 ' 00:04:46.438 10:36:06 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:46.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.438 --rc genhtml_branch_coverage=1 00:04:46.438 --rc genhtml_function_coverage=1 00:04:46.438 --rc genhtml_legend=1 00:04:46.438 --rc geninfo_all_blocks=1 00:04:46.438 --rc geninfo_unexecuted_blocks=1 00:04:46.438 00:04:46.438 ' 00:04:46.438 10:36:06 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:46.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.438 --rc genhtml_branch_coverage=1 00:04:46.438 --rc 
genhtml_function_coverage=1 00:04:46.438 --rc genhtml_legend=1 00:04:46.438 --rc geninfo_all_blocks=1 00:04:46.438 --rc geninfo_unexecuted_blocks=1 00:04:46.438 00:04:46.438 ' 00:04:46.438 10:36:06 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:85329bc7-ddd3-4e8b-9a4d-f786d01c4aeb 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=85329bc7-ddd3-4e8b-9a4d-f786d01c4aeb 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:46.438 10:36:06 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.438 10:36:06 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.439 10:36:06 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.439 10:36:06 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.439 10:36:06 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.439 10:36:06 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:46.439 10:36:06 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.439 10:36:06 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:46.439 10:36:06 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:46.439 10:36:06 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:46.439 10:36:06 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:46.439 10:36:06 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:46.439 10:36:06 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.439 10:36:06 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:46.439 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:46.439 10:36:06 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:46.439 10:36:06 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:46.439 10:36:06 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:46.439 10:36:06 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:46.439 10:36:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:46.439 10:36:06 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:46.439 10:36:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:46.439 10:36:06 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:46.439 10:36:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:46.439 10:36:06 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:46.439 10:36:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:46.439 INFO: launching applications... 00:04:46.439 10:36:06 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:46.439 10:36:06 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:46.439 10:36:06 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
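[Annotation] The json_config suite above exited immediately because none of the SPDK_TEST_* flags it gates on were set ("No tests are enabled"). json_config_extra_key does real work: the launch that follows starts a target with a JSON config and a dedicated RPC socket (-r), then blocks in waitforlisten until the socket answers. A condensed sketch of that start-and-wait pattern; the spdk_tgt flags are exactly the ones traced below, while the polling loop body is an assumption, since waitforlisten's internals are not shown in this log:

    # Sketch: launch spdk_tgt with the extra-key JSON config, wait for RPC.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK_TGT" -m 0x1 -s 1024 -r "$SOCK" \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!

    # Assumed polling loop: wait until a trivial RPC succeeds on the socket.
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" spdk_get_version \
            >/dev/null 2>&1 && break
        sleep 0.1
    done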
00:04:46.439 10:36:06 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:46.439 10:36:06 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:46.439 10:36:06 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:46.439 10:36:06 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:46.439 10:36:06 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:46.439 10:36:06 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:46.439 10:36:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.439 10:36:06 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.439 10:36:06 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71381 00:04:46.439 10:36:06 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:46.439 Waiting for target to run... 00:04:46.439 10:36:06 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71381 /var/tmp/spdk_tgt.sock 00:04:46.439 10:36:06 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 71381 ']' 00:04:46.439 10:36:06 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.439 10:36:06 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.439 10:36:06 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.439 10:36:06 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.439 10:36:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:46.439 10:36:06 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:46.439 [2024-10-08 10:36:06.977142] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:04:46.439 [2024-10-08 10:36:06.977541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71381 ] 00:04:46.738 [2024-10-08 10:36:07.259078] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:46.738 [2024-10-08 10:36:07.280663] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.000 [2024-10-08 10:36:07.301023] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.261 10:36:07 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:47.261 00:04:47.261 10:36:07 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:47.261 10:36:07 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:47.261 INFO: shutting down applications... 00:04:47.261 10:36:07 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
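[Annotation] What follows is the harness's shutdown handshake: it sends SIGINT to the target, then polls with kill -0 in half-second steps, up to 30 tries per the json_config/common.sh trace, and only declares "SPDK target shutdown done" once the process is gone. The shape of that loop, reconstructed from the traced line numbers (a sketch, not the verbatim helper):

    # Sketch: graceful shutdown with a bounded liveness poll.
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break    # process gone? stop waiting
        sleep 0.5
    done
    kill -0 "$app_pid" 2>/dev/null && echo "target did not exit in time" >&2

Bounding the poll at 30 iterations keeps a wedged target from hanging the whole CI run; the trap installed earlier still cleans up on failure.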
00:04:47.261 10:36:07 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:47.261 10:36:07 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:47.261 10:36:07 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:47.261 10:36:07 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71381 ]] 00:04:47.261 10:36:07 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71381 00:04:47.261 10:36:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:47.261 10:36:07 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.261 10:36:07 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71381 00:04:47.261 10:36:07 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.834 10:36:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.834 10:36:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.834 10:36:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71381 00:04:47.834 10:36:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:47.834 10:36:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:47.834 10:36:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:47.834 10:36:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:47.834 SPDK target shutdown done 00:04:47.834 Success 00:04:47.834 10:36:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:47.834 00:04:47.834 real 0m1.582s 00:04:47.834 user 0m1.392s 00:04:47.834 sys 0m0.369s 00:04:47.834 ************************************ 00:04:47.834 10:36:08 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.834 10:36:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.834 END TEST json_config_extra_key 00:04:47.834 ************************************ 00:04:47.834 10:36:08 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.834 10:36:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.834 10:36:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.834 10:36:08 -- common/autotest_common.sh@10 -- # set +x 00:04:47.834 ************************************ 00:04:47.834 START TEST alias_rpc 00:04:47.834 ************************************ 00:04:47.834 10:36:08 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.097 * Looking for test storage... 
00:04:48.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:48.097 10:36:08 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:48.097 10:36:08 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:48.097 10:36:08 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:48.097 10:36:08 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.097 10:36:08 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:48.097 10:36:08 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.097 10:36:08 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:48.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.097 --rc genhtml_branch_coverage=1 00:04:48.097 --rc genhtml_function_coverage=1 00:04:48.097 --rc genhtml_legend=1 00:04:48.097 --rc geninfo_all_blocks=1 00:04:48.097 --rc geninfo_unexecuted_blocks=1 00:04:48.097 00:04:48.097 ' 00:04:48.097 10:36:08 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:48.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.097 --rc genhtml_branch_coverage=1 00:04:48.097 --rc genhtml_function_coverage=1 00:04:48.097 --rc genhtml_legend=1 00:04:48.097 --rc geninfo_all_blocks=1 00:04:48.097 --rc geninfo_unexecuted_blocks=1 00:04:48.097 00:04:48.097 ' 00:04:48.097 10:36:08 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:48.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.097 --rc genhtml_branch_coverage=1 00:04:48.097 --rc genhtml_function_coverage=1 00:04:48.097 --rc genhtml_legend=1 00:04:48.097 --rc geninfo_all_blocks=1 00:04:48.097 --rc geninfo_unexecuted_blocks=1 00:04:48.097 00:04:48.097 ' 00:04:48.097 10:36:08 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:48.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.097 --rc genhtml_branch_coverage=1 00:04:48.097 --rc genhtml_function_coverage=1 00:04:48.097 --rc genhtml_legend=1 00:04:48.097 --rc geninfo_all_blocks=1 00:04:48.097 --rc geninfo_unexecuted_blocks=1 00:04:48.097 00:04:48.097 ' 00:04:48.097 10:36:08 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.097 10:36:08 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71455 00:04:48.097 10:36:08 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71455 00:04:48.097 10:36:08 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 71455 ']' 00:04:48.097 10:36:08 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.097 10:36:08 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.097 10:36:08 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.097 10:36:08 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.097 10:36:08 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.097 10:36:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.097 [2024-10-08 10:36:08.635993] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:04:48.097 [2024-10-08 10:36:08.636155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71455 ] 00:04:48.359 [2024-10-08 10:36:08.768959] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:04:48.359 [2024-10-08 10:36:08.788934] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.359 [2024-10-08 10:36:08.838733] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.933 10:36:09 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:48.933 10:36:09 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:48.933 10:36:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:49.194 10:36:09 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71455 00:04:49.194 10:36:09 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 71455 ']' 00:04:49.194 10:36:09 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 71455 00:04:49.194 10:36:09 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:49.194 10:36:09 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:49.194 10:36:09 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71455 00:04:49.194 10:36:09 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:49.194 killing process with pid 71455 00:04:49.194 10:36:09 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:49.194 10:36:09 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71455' 00:04:49.194 10:36:09 alias_rpc -- common/autotest_common.sh@969 -- # kill 71455 00:04:49.194 10:36:09 alias_rpc -- common/autotest_common.sh@974 -- # wait 71455 00:04:49.768 00:04:49.768 real 0m1.695s 00:04:49.768 user 0m1.735s 00:04:49.768 sys 0m0.479s 00:04:49.768 10:36:10 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.768 ************************************ 00:04:49.768 END TEST alias_rpc 00:04:49.768 ************************************ 00:04:49.768 10:36:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.768 10:36:10 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:49.768 10:36:10 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:49.768 10:36:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.768 10:36:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.768 10:36:10 -- common/autotest_common.sh@10 -- # set +x 00:04:49.768 ************************************ 00:04:49.768 START TEST spdkcli_tcp 00:04:49.768 ************************************ 00:04:49.768 10:36:10 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:49.768 * Looking for test storage... 
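[Annotation] The alias_rpc suite above is short: it starts a target, drives scripts/rpc.py load_config -i against it (the -i flag presumably covering the deprecated-alias path, given the suite's name), and tears down through the same killprocess helper every suite here uses. That helper, per the trace, checks the pid is alive with kill -0 and reads the process name via ps --no-headers -o comm= (reactor_0 for an SPDK target) before killing; the traced code uses the name to decide how to kill (a sudo-spawned target needs different handling), whereas the sketch below keeps only the liveness and identity checks as a simple guard:

    # Sketch: kill an SPDK target only after verifying it is still ours.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1         # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")        # e.g. reactor_0
        [[ $name == reactor_* ]] || return 1           # refuse unrelated pids
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                     # wait works for children
    }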
00:04:49.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:49.768 10:36:10 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:49.768 10:36:10 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:49.768 10:36:10 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:49.768 10:36:10 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:49.768 10:36:10 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.768 10:36:10 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.768 10:36:10 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.768 10:36:10 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.768 10:36:10 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.768 10:36:10 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.768 10:36:10 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.769 10:36:10 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:49.769 10:36:10 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.769 10:36:10 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:49.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.769 --rc genhtml_branch_coverage=1 00:04:49.769 --rc genhtml_function_coverage=1 00:04:49.769 --rc genhtml_legend=1 00:04:49.769 --rc geninfo_all_blocks=1 00:04:49.769 --rc geninfo_unexecuted_blocks=1 00:04:49.769 00:04:49.769 ' 00:04:49.769 10:36:10 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:49.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.769 --rc genhtml_branch_coverage=1 00:04:49.769 --rc genhtml_function_coverage=1 00:04:49.769 --rc genhtml_legend=1 00:04:49.769 --rc geninfo_all_blocks=1 00:04:49.769 --rc geninfo_unexecuted_blocks=1 00:04:49.769 
00:04:49.769 ' 00:04:49.769 10:36:10 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:49.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.769 --rc genhtml_branch_coverage=1 00:04:49.769 --rc genhtml_function_coverage=1 00:04:49.769 --rc genhtml_legend=1 00:04:49.769 --rc geninfo_all_blocks=1 00:04:49.769 --rc geninfo_unexecuted_blocks=1 00:04:49.769 00:04:49.769 ' 00:04:49.769 10:36:10 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:49.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.769 --rc genhtml_branch_coverage=1 00:04:49.769 --rc genhtml_function_coverage=1 00:04:49.769 --rc genhtml_legend=1 00:04:49.769 --rc geninfo_all_blocks=1 00:04:49.769 --rc geninfo_unexecuted_blocks=1 00:04:49.769 00:04:49.769 ' 00:04:49.769 10:36:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:49.769 10:36:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:49.769 10:36:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:49.769 10:36:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:49.769 10:36:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:49.769 10:36:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:49.769 10:36:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:49.769 10:36:10 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:49.769 10:36:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.769 10:36:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71540 00:04:49.769 10:36:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71540 00:04:49.769 10:36:10 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 71540 ']' 00:04:49.769 10:36:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:49.769 10:36:10 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.769 10:36:10 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.769 10:36:10 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.769 10:36:10 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.769 10:36:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.030 [2024-10-08 10:36:10.401649] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:04:50.030 [2024-10-08 10:36:10.401813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71540 ] 00:04:50.030 [2024-10-08 10:36:10.535748] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
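[Annotation] The spdkcli_tcp suite needs the target reachable over TCP, but spdk_tgt only listens on its Unix-domain socket, so the next lines bridge the two with socat: a TCP listener on 127.0.0.1:9998 forwards to /var/tmp/spdk.sock, and rpc.py is pointed at the TCP side with the retry (-r) and timeout (-t) flags visible in the trace. The bridge as a standalone sketch, with all addresses and flags as traced:

    # Sketch: expose the Unix-domain RPC socket over TCP for spdkcli_tcp.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r 100 retries with -t 2s timeout, exactly as in the traced invocation.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"

The rpc_get_methods reply that follows (the long quoted list) is the full method table of the running target; the test only needs it to arrive intact over the TCP hop.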
00:04:50.030 [2024-10-08 10:36:10.553517] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.030 [2024-10-08 10:36:10.604816] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.030 [2024-10-08 10:36:10.605180] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.973 10:36:11 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.973 10:36:11 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:50.973 10:36:11 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71556 00:04:50.973 10:36:11 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:50.973 10:36:11 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:50.973 [ 00:04:50.973 "bdev_malloc_delete", 00:04:50.973 "bdev_malloc_create", 00:04:50.973 "bdev_null_resize", 00:04:50.973 "bdev_null_delete", 00:04:50.973 "bdev_null_create", 00:04:50.973 "bdev_nvme_cuse_unregister", 00:04:50.973 "bdev_nvme_cuse_register", 00:04:50.973 "bdev_opal_new_user", 00:04:50.973 "bdev_opal_set_lock_state", 00:04:50.973 "bdev_opal_delete", 00:04:50.973 "bdev_opal_get_info", 00:04:50.973 "bdev_opal_create", 00:04:50.973 "bdev_nvme_opal_revert", 00:04:50.973 "bdev_nvme_opal_init", 00:04:50.973 "bdev_nvme_send_cmd", 00:04:50.973 "bdev_nvme_set_keys", 00:04:50.973 "bdev_nvme_get_path_iostat", 00:04:50.973 "bdev_nvme_get_mdns_discovery_info", 00:04:50.973 "bdev_nvme_stop_mdns_discovery", 00:04:50.973 "bdev_nvme_start_mdns_discovery", 00:04:50.973 "bdev_nvme_set_multipath_policy", 00:04:50.973 "bdev_nvme_set_preferred_path", 00:04:50.973 "bdev_nvme_get_io_paths", 00:04:50.973 "bdev_nvme_remove_error_injection", 00:04:50.973 "bdev_nvme_add_error_injection", 00:04:50.973 "bdev_nvme_get_discovery_info", 00:04:50.973 "bdev_nvme_stop_discovery", 00:04:50.973 "bdev_nvme_start_discovery", 00:04:50.973 "bdev_nvme_get_controller_health_info", 00:04:50.973 "bdev_nvme_disable_controller", 00:04:50.973 "bdev_nvme_enable_controller", 00:04:50.973 "bdev_nvme_reset_controller", 00:04:50.973 "bdev_nvme_get_transport_statistics", 00:04:50.973 "bdev_nvme_apply_firmware", 00:04:50.973 "bdev_nvme_detach_controller", 00:04:50.973 "bdev_nvme_get_controllers", 00:04:50.973 "bdev_nvme_attach_controller", 00:04:50.973 "bdev_nvme_set_hotplug", 00:04:50.973 "bdev_nvme_set_options", 00:04:50.973 "bdev_passthru_delete", 00:04:50.973 "bdev_passthru_create", 00:04:50.973 "bdev_lvol_set_parent_bdev", 00:04:50.973 "bdev_lvol_set_parent", 00:04:50.973 "bdev_lvol_check_shallow_copy", 00:04:50.973 "bdev_lvol_start_shallow_copy", 00:04:50.973 "bdev_lvol_grow_lvstore", 00:04:50.973 "bdev_lvol_get_lvols", 00:04:50.973 "bdev_lvol_get_lvstores", 00:04:50.973 "bdev_lvol_delete", 00:04:50.973 "bdev_lvol_set_read_only", 00:04:50.973 "bdev_lvol_resize", 00:04:50.973 "bdev_lvol_decouple_parent", 00:04:50.973 "bdev_lvol_inflate", 00:04:50.973 "bdev_lvol_rename", 00:04:50.973 "bdev_lvol_clone_bdev", 00:04:50.973 "bdev_lvol_clone", 00:04:50.973 "bdev_lvol_snapshot", 00:04:50.973 "bdev_lvol_create", 00:04:50.973 "bdev_lvol_delete_lvstore", 00:04:50.973 "bdev_lvol_rename_lvstore", 00:04:50.974 "bdev_lvol_create_lvstore", 00:04:50.974 "bdev_raid_set_options", 00:04:50.974 "bdev_raid_remove_base_bdev", 00:04:50.974 "bdev_raid_add_base_bdev", 00:04:50.974 "bdev_raid_delete", 00:04:50.974 "bdev_raid_create", 00:04:50.974 "bdev_raid_get_bdevs", 00:04:50.974 "bdev_error_inject_error", 00:04:50.974 
"bdev_error_delete", 00:04:50.974 "bdev_error_create", 00:04:50.974 "bdev_split_delete", 00:04:50.974 "bdev_split_create", 00:04:50.974 "bdev_delay_delete", 00:04:50.974 "bdev_delay_create", 00:04:50.974 "bdev_delay_update_latency", 00:04:50.974 "bdev_zone_block_delete", 00:04:50.974 "bdev_zone_block_create", 00:04:50.974 "blobfs_create", 00:04:50.974 "blobfs_detect", 00:04:50.974 "blobfs_set_cache_size", 00:04:50.974 "bdev_xnvme_delete", 00:04:50.974 "bdev_xnvme_create", 00:04:50.974 "bdev_aio_delete", 00:04:50.974 "bdev_aio_rescan", 00:04:50.974 "bdev_aio_create", 00:04:50.974 "bdev_ftl_set_property", 00:04:50.974 "bdev_ftl_get_properties", 00:04:50.974 "bdev_ftl_get_stats", 00:04:50.974 "bdev_ftl_unmap", 00:04:50.974 "bdev_ftl_unload", 00:04:50.974 "bdev_ftl_delete", 00:04:50.974 "bdev_ftl_load", 00:04:50.974 "bdev_ftl_create", 00:04:50.974 "bdev_virtio_attach_controller", 00:04:50.974 "bdev_virtio_scsi_get_devices", 00:04:50.974 "bdev_virtio_detach_controller", 00:04:50.974 "bdev_virtio_blk_set_hotplug", 00:04:50.974 "bdev_iscsi_delete", 00:04:50.974 "bdev_iscsi_create", 00:04:50.974 "bdev_iscsi_set_options", 00:04:50.974 "accel_error_inject_error", 00:04:50.974 "ioat_scan_accel_module", 00:04:50.974 "dsa_scan_accel_module", 00:04:50.974 "iaa_scan_accel_module", 00:04:50.974 "keyring_file_remove_key", 00:04:50.974 "keyring_file_add_key", 00:04:50.974 "keyring_linux_set_options", 00:04:50.974 "fsdev_aio_delete", 00:04:50.974 "fsdev_aio_create", 00:04:50.974 "iscsi_get_histogram", 00:04:50.974 "iscsi_enable_histogram", 00:04:50.974 "iscsi_set_options", 00:04:50.974 "iscsi_get_auth_groups", 00:04:50.974 "iscsi_auth_group_remove_secret", 00:04:50.974 "iscsi_auth_group_add_secret", 00:04:50.974 "iscsi_delete_auth_group", 00:04:50.974 "iscsi_create_auth_group", 00:04:50.974 "iscsi_set_discovery_auth", 00:04:50.974 "iscsi_get_options", 00:04:50.974 "iscsi_target_node_request_logout", 00:04:50.974 "iscsi_target_node_set_redirect", 00:04:50.974 "iscsi_target_node_set_auth", 00:04:50.974 "iscsi_target_node_add_lun", 00:04:50.974 "iscsi_get_stats", 00:04:50.974 "iscsi_get_connections", 00:04:50.974 "iscsi_portal_group_set_auth", 00:04:50.974 "iscsi_start_portal_group", 00:04:50.974 "iscsi_delete_portal_group", 00:04:50.974 "iscsi_create_portal_group", 00:04:50.974 "iscsi_get_portal_groups", 00:04:50.974 "iscsi_delete_target_node", 00:04:50.974 "iscsi_target_node_remove_pg_ig_maps", 00:04:50.974 "iscsi_target_node_add_pg_ig_maps", 00:04:50.974 "iscsi_create_target_node", 00:04:50.974 "iscsi_get_target_nodes", 00:04:50.974 "iscsi_delete_initiator_group", 00:04:50.974 "iscsi_initiator_group_remove_initiators", 00:04:50.974 "iscsi_initiator_group_add_initiators", 00:04:50.974 "iscsi_create_initiator_group", 00:04:50.974 "iscsi_get_initiator_groups", 00:04:50.974 "nvmf_set_crdt", 00:04:50.974 "nvmf_set_config", 00:04:50.974 "nvmf_set_max_subsystems", 00:04:50.974 "nvmf_stop_mdns_prr", 00:04:50.974 "nvmf_publish_mdns_prr", 00:04:50.974 "nvmf_subsystem_get_listeners", 00:04:50.974 "nvmf_subsystem_get_qpairs", 00:04:50.974 "nvmf_subsystem_get_controllers", 00:04:50.974 "nvmf_get_stats", 00:04:50.974 "nvmf_get_transports", 00:04:50.974 "nvmf_create_transport", 00:04:50.974 "nvmf_get_targets", 00:04:50.974 "nvmf_delete_target", 00:04:50.974 "nvmf_create_target", 00:04:50.974 "nvmf_subsystem_allow_any_host", 00:04:50.974 "nvmf_subsystem_set_keys", 00:04:50.974 "nvmf_subsystem_remove_host", 00:04:50.974 "nvmf_subsystem_add_host", 00:04:50.974 "nvmf_ns_remove_host", 00:04:50.974 "nvmf_ns_add_host", 
00:04:50.974 "nvmf_subsystem_remove_ns", 00:04:50.974 "nvmf_subsystem_set_ns_ana_group", 00:04:50.974 "nvmf_subsystem_add_ns", 00:04:50.974 "nvmf_subsystem_listener_set_ana_state", 00:04:50.974 "nvmf_discovery_get_referrals", 00:04:50.974 "nvmf_discovery_remove_referral", 00:04:50.974 "nvmf_discovery_add_referral", 00:04:50.974 "nvmf_subsystem_remove_listener", 00:04:50.974 "nvmf_subsystem_add_listener", 00:04:50.974 "nvmf_delete_subsystem", 00:04:50.974 "nvmf_create_subsystem", 00:04:50.974 "nvmf_get_subsystems", 00:04:50.974 "env_dpdk_get_mem_stats", 00:04:50.974 "nbd_get_disks", 00:04:50.974 "nbd_stop_disk", 00:04:50.974 "nbd_start_disk", 00:04:50.974 "ublk_recover_disk", 00:04:50.974 "ublk_get_disks", 00:04:50.974 "ublk_stop_disk", 00:04:50.974 "ublk_start_disk", 00:04:50.974 "ublk_destroy_target", 00:04:50.974 "ublk_create_target", 00:04:50.974 "virtio_blk_create_transport", 00:04:50.974 "virtio_blk_get_transports", 00:04:50.974 "vhost_controller_set_coalescing", 00:04:50.974 "vhost_get_controllers", 00:04:50.974 "vhost_delete_controller", 00:04:50.974 "vhost_create_blk_controller", 00:04:50.974 "vhost_scsi_controller_remove_target", 00:04:50.974 "vhost_scsi_controller_add_target", 00:04:50.974 "vhost_start_scsi_controller", 00:04:50.974 "vhost_create_scsi_controller", 00:04:50.974 "thread_set_cpumask", 00:04:50.974 "scheduler_set_options", 00:04:50.974 "framework_get_governor", 00:04:50.974 "framework_get_scheduler", 00:04:50.974 "framework_set_scheduler", 00:04:50.974 "framework_get_reactors", 00:04:50.974 "thread_get_io_channels", 00:04:50.974 "thread_get_pollers", 00:04:50.974 "thread_get_stats", 00:04:50.974 "framework_monitor_context_switch", 00:04:50.974 "spdk_kill_instance", 00:04:50.974 "log_enable_timestamps", 00:04:50.974 "log_get_flags", 00:04:50.974 "log_clear_flag", 00:04:50.974 "log_set_flag", 00:04:50.974 "log_get_level", 00:04:50.974 "log_set_level", 00:04:50.974 "log_get_print_level", 00:04:50.974 "log_set_print_level", 00:04:50.974 "framework_enable_cpumask_locks", 00:04:50.974 "framework_disable_cpumask_locks", 00:04:50.974 "framework_wait_init", 00:04:50.974 "framework_start_init", 00:04:50.974 "scsi_get_devices", 00:04:50.974 "bdev_get_histogram", 00:04:50.974 "bdev_enable_histogram", 00:04:50.974 "bdev_set_qos_limit", 00:04:50.974 "bdev_set_qd_sampling_period", 00:04:50.974 "bdev_get_bdevs", 00:04:50.974 "bdev_reset_iostat", 00:04:50.974 "bdev_get_iostat", 00:04:50.974 "bdev_examine", 00:04:50.974 "bdev_wait_for_examine", 00:04:50.974 "bdev_set_options", 00:04:50.974 "accel_get_stats", 00:04:50.974 "accel_set_options", 00:04:50.974 "accel_set_driver", 00:04:50.974 "accel_crypto_key_destroy", 00:04:50.974 "accel_crypto_keys_get", 00:04:50.974 "accel_crypto_key_create", 00:04:50.974 "accel_assign_opc", 00:04:50.974 "accel_get_module_info", 00:04:50.974 "accel_get_opc_assignments", 00:04:50.974 "vmd_rescan", 00:04:50.974 "vmd_remove_device", 00:04:50.974 "vmd_enable", 00:04:50.974 "sock_get_default_impl", 00:04:50.974 "sock_set_default_impl", 00:04:50.974 "sock_impl_set_options", 00:04:50.974 "sock_impl_get_options", 00:04:50.974 "iobuf_get_stats", 00:04:50.974 "iobuf_set_options", 00:04:50.974 "keyring_get_keys", 00:04:50.974 "framework_get_pci_devices", 00:04:50.974 "framework_get_config", 00:04:50.974 "framework_get_subsystems", 00:04:50.974 "fsdev_set_opts", 00:04:50.974 "fsdev_get_opts", 00:04:50.974 "trace_get_info", 00:04:50.974 "trace_get_tpoint_group_mask", 00:04:50.974 "trace_disable_tpoint_group", 00:04:50.974 "trace_enable_tpoint_group", 00:04:50.974 
"trace_clear_tpoint_mask", 00:04:50.974 "trace_set_tpoint_mask", 00:04:50.974 "notify_get_notifications", 00:04:50.974 "notify_get_types", 00:04:50.974 "spdk_get_version", 00:04:50.974 "rpc_get_methods" 00:04:50.974 ] 00:04:50.974 10:36:11 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:50.974 10:36:11 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:50.974 10:36:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.974 10:36:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:50.974 10:36:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71540 00:04:50.974 10:36:11 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 71540 ']' 00:04:50.974 10:36:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 71540 00:04:50.974 10:36:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:50.974 10:36:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.974 10:36:11 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71540 00:04:50.974 10:36:11 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.974 killing process with pid 71540 00:04:50.974 10:36:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.974 10:36:11 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71540' 00:04:50.974 10:36:11 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 71540 00:04:50.974 10:36:11 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 71540 00:04:51.547 ************************************ 00:04:51.547 END TEST spdkcli_tcp 00:04:51.547 ************************************ 00:04:51.547 00:04:51.547 real 0m1.714s 00:04:51.547 user 0m2.940s 00:04:51.547 sys 0m0.497s 00:04:51.547 10:36:11 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.547 10:36:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.547 10:36:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.547 10:36:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.547 10:36:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.547 10:36:11 -- common/autotest_common.sh@10 -- # set +x 00:04:51.547 ************************************ 00:04:51.547 START TEST dpdk_mem_utility 00:04:51.547 ************************************ 00:04:51.547 10:36:11 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.547 * Looking for test storage... 
00:04:51.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.547 10:36:12 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:51.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.547 --rc genhtml_branch_coverage=1 00:04:51.547 --rc genhtml_function_coverage=1 00:04:51.547 --rc genhtml_legend=1 00:04:51.547 --rc geninfo_all_blocks=1 00:04:51.547 --rc geninfo_unexecuted_blocks=1 00:04:51.547 00:04:51.547 ' 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:51.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.547 --rc 
genhtml_branch_coverage=1 00:04:51.547 --rc genhtml_function_coverage=1 00:04:51.547 --rc genhtml_legend=1 00:04:51.547 --rc geninfo_all_blocks=1 00:04:51.547 --rc geninfo_unexecuted_blocks=1 00:04:51.547 00:04:51.547 ' 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:51.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.547 --rc genhtml_branch_coverage=1 00:04:51.547 --rc genhtml_function_coverage=1 00:04:51.547 --rc genhtml_legend=1 00:04:51.547 --rc geninfo_all_blocks=1 00:04:51.547 --rc geninfo_unexecuted_blocks=1 00:04:51.547 00:04:51.547 ' 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:51.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.547 --rc genhtml_branch_coverage=1 00:04:51.547 --rc genhtml_function_coverage=1 00:04:51.547 --rc genhtml_legend=1 00:04:51.547 --rc geninfo_all_blocks=1 00:04:51.547 --rc geninfo_unexecuted_blocks=1 00:04:51.547 00:04:51.547 ' 00:04:51.547 10:36:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:51.547 10:36:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71634 00:04:51.547 10:36:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71634 00:04:51.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 71634 ']' 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.547 10:36:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.547 10:36:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.808 [2024-10-08 10:36:12.185557] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:04:51.808 [2024-10-08 10:36:12.185704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71634 ] 00:04:51.808 [2024-10-08 10:36:12.318189] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
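The trace above launches spdk_tgt in the background and then waits for its RPC socket before issuing any commands. A minimal standalone sketch of that startup-and-wait flow, assuming the default /var/tmp/spdk.sock path shown in the log (the retry count and sleep interval are illustrative; the harness's waitforlisten does a stricter check, retrying an actual RPC until it succeeds):

    # launch the target in the background, then poll until its UNIX-domain RPC socket exists
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdkpid=$!
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break   # socket node appears once the app has initialized
        sleep 0.1
    done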
00:04:51.808 [2024-10-08 10:36:12.338364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.070 [2024-10-08 10:36:12.388889] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.644 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.644 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:52.644 10:36:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:52.644 10:36:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:52.644 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.644 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:52.644 { 00:04:52.644 "filename": "/tmp/spdk_mem_dump.txt" 00:04:52.644 } 00:04:52.644 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.644 10:36:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:52.644 DPDK memory size 860.000000 MiB in 1 heap(s) 00:04:52.644 1 heaps totaling size 860.000000 MiB 00:04:52.644 size: 860.000000 MiB heap id: 0 00:04:52.644 end heaps---------- 00:04:52.644 9 mempools totaling size 642.649841 MiB 00:04:52.644 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:52.644 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:52.644 size: 92.545471 MiB name: bdev_io_71634 00:04:52.644 size: 51.011292 MiB name: evtpool_71634 00:04:52.644 size: 50.003479 MiB name: msgpool_71634 00:04:52.644 size: 36.509338 MiB name: fsdev_io_71634 00:04:52.644 size: 21.763794 MiB name: PDU_Pool 00:04:52.644 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:52.644 size: 0.026123 MiB name: Session_Pool 00:04:52.644 end mempools------- 00:04:52.644 6 memzones totaling size 4.142822 MiB 00:04:52.644 size: 1.000366 MiB name: RG_ring_0_71634 00:04:52.644 size: 1.000366 MiB name: RG_ring_1_71634 00:04:52.644 size: 1.000366 MiB name: RG_ring_4_71634 00:04:52.644 size: 1.000366 MiB name: RG_ring_5_71634 00:04:52.644 size: 0.125366 MiB name: RG_ring_2_71634 00:04:52.644 size: 0.015991 MiB name: RG_ring_3_71634 00:04:52.644 end memzones------- 00:04:52.644 10:36:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:52.644 heap id: 0 total size: 860.000000 MiB number of busy elements: 305 number of free elements: 16 00:04:52.644 list of free elements. 
size: 13.811890 MiB 00:04:52.644 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:52.644 element at address: 0x200000800000 with size: 1.996948 MiB 00:04:52.644 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:04:52.644 element at address: 0x20001be00000 with size: 0.999878 MiB 00:04:52.644 element at address: 0x200034a00000 with size: 0.994446 MiB 00:04:52.644 element at address: 0x200009600000 with size: 0.959839 MiB 00:04:52.644 element at address: 0x200015e00000 with size: 0.954285 MiB 00:04:52.644 element at address: 0x20001c000000 with size: 0.936584 MiB 00:04:52.644 element at address: 0x200000200000 with size: 0.709839 MiB 00:04:52.644 element at address: 0x20001d800000 with size: 0.568237 MiB 00:04:52.644 element at address: 0x20000d800000 with size: 0.489258 MiB 00:04:52.644 element at address: 0x200003e00000 with size: 0.488098 MiB 00:04:52.644 element at address: 0x20001c200000 with size: 0.485657 MiB 00:04:52.644 element at address: 0x200007000000 with size: 0.480469 MiB 00:04:52.644 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:04:52.644 element at address: 0x200003a00000 with size: 0.353210 MiB 00:04:52.644 list of standard malloc elements. size: 199.391418 MiB 00:04:52.644 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:04:52.644 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:04:52.644 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:04:52.644 element at address: 0x20001befff80 with size: 1.000122 MiB 00:04:52.644 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:04:52.644 element at address: 0x2000003b9f00 with size: 0.265747 MiB 00:04:52.644 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:04:52.644 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:52.644 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:04:52.644 element at address: 0x2000002b5b80 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b5c40 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b5d00 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b5dc0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b5e80 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b5f40 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6000 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b60c0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6180 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6240 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6300 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b63c0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6480 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6540 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6600 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b66c0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b68c0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6980 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6a40 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6b00 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6bc0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6c80 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6d40 with size: 0.000183 MiB 
00:04:52.644 element at address: 0x2000002b6e00 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6ec0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b6f80 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b7040 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b7100 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b71c0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b7280 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b7340 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b7400 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b74c0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b7580 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b7640 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b7700 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b77c0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b7880 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b7940 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b7a00 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b7ac0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b7b80 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000002b7c40 with size: 0.000183 MiB 00:04:52.644 element at address: 0x2000003b9e40 with size: 0.000183 MiB 00:04:52.644 element at address: 0x200003a5a6c0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x200003a5eb80 with size: 0.000183 MiB 00:04:52.644 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:04:52.644 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:04:52.644 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:04:52.644 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:04:52.644 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:04:52.644 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:04:52.644 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:04:52.644 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:04:52.644 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003aff940 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:04:52.645 element at 
address: 0x200003e7d780 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003eff000 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000707b000 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000707b180 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000707b240 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000707b300 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000707b480 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000707b540 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000707b600 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:04:52.645 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000d87d7c0 
with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:04:52.645 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d891780 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d891840 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d891900 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892080 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892140 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892200 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892380 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892440 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892500 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892680 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892740 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892800 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892980 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893040 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893100 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893280 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893340 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893400 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d8934c0 with size: 0.000183 MiB 
00:04:52.645 element at address: 0x20001d893580 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893640 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893700 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893880 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893940 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894000 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894180 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894240 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894300 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894480 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894540 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894600 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894780 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894840 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894900 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d895080 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d895140 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d895200 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d895380 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20001d895440 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:04:52.645 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:04:52.646 element at 
address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6eb80 
with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:04:52.646 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:04:52.646 list of memzone associated elements. 
size: 646.796692 MiB 00:04:52.646 element at address: 0x20001d895500 with size: 211.416748 MiB 00:04:52.646 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:52.646 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:04:52.646 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:52.646 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:04:52.646 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71634_0 00:04:52.646 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:52.646 associated memzone info: size: 48.002930 MiB name: MP_evtpool_71634_0 00:04:52.646 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:52.646 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71634_0 00:04:52.646 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:04:52.646 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71634_0 00:04:52.646 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:04:52.646 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:52.646 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:04:52.646 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:52.646 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:52.646 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_71634 00:04:52.646 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:52.646 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71634 00:04:52.646 element at address: 0x2000002b7d00 with size: 1.008118 MiB 00:04:52.646 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71634 00:04:52.646 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:04:52.646 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:52.646 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:04:52.646 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:52.646 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:04:52.646 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:52.646 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:04:52.646 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:52.646 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:52.646 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71634 00:04:52.646 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:52.646 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71634 00:04:52.646 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:04:52.646 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71634 00:04:52.646 element at address: 0x200034afe940 with size: 1.000488 MiB 00:04:52.646 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71634 00:04:52.646 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:04:52.646 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71634 00:04:52.646 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:04:52.646 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71634 00:04:52.646 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:04:52.646 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:52.646 element at address: 0x20000707b780 with size: 0.500488 MiB 00:04:52.646 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:04:52.646 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:04:52.646 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:52.646 element at address: 0x200003a5ec40 with size: 0.125488 MiB 00:04:52.646 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71634 00:04:52.646 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:04:52.646 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:52.646 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:04:52.646 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:52.646 element at address: 0x200003a5a980 with size: 0.016113 MiB 00:04:52.646 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71634 00:04:52.646 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:04:52.646 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:52.646 element at address: 0x2000002b6780 with size: 0.000305 MiB 00:04:52.646 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71634 00:04:52.646 element at address: 0x200003affa00 with size: 0.000305 MiB 00:04:52.646 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71634 00:04:52.646 element at address: 0x200003a5a780 with size: 0.000305 MiB 00:04:52.646 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71634 00:04:52.646 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:04:52.646 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:52.646 10:36:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:52.646 10:36:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71634 00:04:52.646 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 71634 ']' 00:04:52.646 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 71634 00:04:52.646 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:52.646 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.646 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71634 00:04:52.646 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.646 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.646 killing process with pid 71634 00:04:52.646 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71634' 00:04:52.647 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 71634 00:04:52.647 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 71634 00:04:53.219 00:04:53.219 real 0m1.612s 00:04:53.219 user 0m1.573s 00:04:53.219 sys 0m0.494s 00:04:53.219 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.219 ************************************ 00:04:53.219 END TEST dpdk_mem_utility 00:04:53.219 ************************************ 00:04:53.219 10:36:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:53.219 10:36:13 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:53.219 10:36:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.219 10:36:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.219 10:36:13 -- common/autotest_common.sh@10 -- # set +x 
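The memory report above is produced in two steps: env_dpdk_get_mem_stats asks the running target to dump its DPDK memory state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py parses that dump into the heap/mempool/memzone summary. All three invocations below appear verbatim in the trace; only running them by hand is the variation:

    # ask the target to dump its DPDK memory state, then summarize the dump
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # returns {"filename": "/tmp/spdk_mem_dump.txt"}
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                 # totals per heap, mempool, and memzone
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0            # per-element detail for heap 0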
00:04:53.219 ************************************ 00:04:53.219 START TEST event 00:04:53.219 ************************************ 00:04:53.219 10:36:13 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:53.219 * Looking for test storage... 00:04:53.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:53.219 10:36:13 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:53.219 10:36:13 event -- common/autotest_common.sh@1681 -- # lcov --version 00:04:53.219 10:36:13 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:53.219 10:36:13 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:53.219 10:36:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.219 10:36:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.219 10:36:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.219 10:36:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.219 10:36:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.219 10:36:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.219 10:36:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.219 10:36:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.219 10:36:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.219 10:36:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.219 10:36:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.219 10:36:13 event -- scripts/common.sh@344 -- # case "$op" in 00:04:53.219 10:36:13 event -- scripts/common.sh@345 -- # : 1 00:04:53.219 10:36:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.219 10:36:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.219 10:36:13 event -- scripts/common.sh@365 -- # decimal 1 00:04:53.219 10:36:13 event -- scripts/common.sh@353 -- # local d=1 00:04:53.219 10:36:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.219 10:36:13 event -- scripts/common.sh@355 -- # echo 1 00:04:53.219 10:36:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.219 10:36:13 event -- scripts/common.sh@366 -- # decimal 2 00:04:53.219 10:36:13 event -- scripts/common.sh@353 -- # local d=2 00:04:53.219 10:36:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.220 10:36:13 event -- scripts/common.sh@355 -- # echo 2 00:04:53.220 10:36:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.220 10:36:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.220 10:36:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.220 10:36:13 event -- scripts/common.sh@368 -- # return 0 00:04:53.220 10:36:13 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.220 10:36:13 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:53.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.220 --rc genhtml_branch_coverage=1 00:04:53.220 --rc genhtml_function_coverage=1 00:04:53.220 --rc genhtml_legend=1 00:04:53.220 --rc geninfo_all_blocks=1 00:04:53.220 --rc geninfo_unexecuted_blocks=1 00:04:53.220 00:04:53.220 ' 00:04:53.220 10:36:13 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:53.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.220 --rc genhtml_branch_coverage=1 00:04:53.220 --rc genhtml_function_coverage=1 00:04:53.220 --rc genhtml_legend=1 00:04:53.220 --rc 
geninfo_all_blocks=1 00:04:53.220 --rc geninfo_unexecuted_blocks=1 00:04:53.220 00:04:53.220 ' 00:04:53.220 10:36:13 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:53.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.220 --rc genhtml_branch_coverage=1 00:04:53.220 --rc genhtml_function_coverage=1 00:04:53.220 --rc genhtml_legend=1 00:04:53.220 --rc geninfo_all_blocks=1 00:04:53.220 --rc geninfo_unexecuted_blocks=1 00:04:53.220 00:04:53.220 ' 00:04:53.220 10:36:13 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:53.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.220 --rc genhtml_branch_coverage=1 00:04:53.220 --rc genhtml_function_coverage=1 00:04:53.220 --rc genhtml_legend=1 00:04:53.220 --rc geninfo_all_blocks=1 00:04:53.220 --rc geninfo_unexecuted_blocks=1 00:04:53.220 00:04:53.220 ' 00:04:53.220 10:36:13 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:53.220 10:36:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:53.220 10:36:13 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.220 10:36:13 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:53.220 10:36:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.220 10:36:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.220 ************************************ 00:04:53.220 START TEST event_perf 00:04:53.220 ************************************ 00:04:53.220 10:36:13 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.481 Running I/O for 1 seconds...[2024-10-08 10:36:13.810267] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:04:53.481 [2024-10-08 10:36:13.810407] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71715 ] 00:04:53.481 [2024-10-08 10:36:13.948719] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:53.481 [2024-10-08 10:36:13.966469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:53.481 [2024-10-08 10:36:14.019075] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.481 [2024-10-08 10:36:14.019414] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.481 Running I/O for 1 seconds...[2024-10-08 10:36:14.019829] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:04:53.481 [2024-10-08 10:36:14.019831] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.867 00:04:54.867 lcore 0: 133292 00:04:54.867 lcore 1: 133290 00:04:54.867 lcore 2: 133288 00:04:54.867 lcore 3: 133290 00:04:54.867 done. 
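The four lcore counters above are event_perf's per-core event tally. They follow directly from the invocation recorded earlier in the trace: -m 0xF pins reactors to cores 0 through 3 (hence "Total cores available: 4" and the four reactor/lcore lines), and -t 1 sets the measurement window in seconds, matching the "Running I/O for 1 seconds" banner:

    # four reactors, one-second run; each lcore line reports events processed on that core
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1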
00:04:54.867 00:04:54.867 real 0m1.329s 00:04:54.867 user 0m4.101s 00:04:54.867 sys 0m0.104s 00:04:54.867 10:36:15 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.867 ************************************ 00:04:54.867 END TEST event_perf 00:04:54.867 ************************************ 00:04:54.867 10:36:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.867 10:36:15 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:54.867 10:36:15 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:54.867 10:36:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.867 10:36:15 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.867 ************************************ 00:04:54.867 START TEST event_reactor 00:04:54.867 ************************************ 00:04:54.867 10:36:15 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:54.867 [2024-10-08 10:36:15.205637] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:04:54.867 [2024-10-08 10:36:15.205760] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71754 ] 00:04:54.867 [2024-10-08 10:36:15.335365] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:54.867 [2024-10-08 10:36:15.349242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.867 [2024-10-08 10:36:15.398096] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.252 test_start 00:04:56.252 oneshot 00:04:56.252 tick 100 00:04:56.252 tick 100 00:04:56.252 tick 250 00:04:56.252 tick 100 00:04:56.252 tick 100 00:04:56.252 tick 100 00:04:56.252 tick 250 00:04:56.252 tick 500 00:04:56.252 tick 100 00:04:56.252 tick 100 00:04:56.252 tick 250 00:04:56.252 tick 100 00:04:56.252 tick 100 00:04:56.252 test_end 00:04:56.252 00:04:56.252 real 0m1.300s 00:04:56.252 user 0m1.106s 00:04:56.252 sys 0m0.083s 00:04:56.252 10:36:16 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.252 ************************************ 00:04:56.252 END TEST event_reactor 00:04:56.252 ************************************ 00:04:56.252 10:36:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:56.252 10:36:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.252 10:36:16 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:56.252 10:36:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.252 10:36:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.252 ************************************ 00:04:56.252 START TEST event_reactor_perf 00:04:56.252 ************************************ 00:04:56.252 10:36:16 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.252 [2024-10-08 10:36:16.564152] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:04:56.252 [2024-10-08 10:36:16.564277] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71791 ] 00:04:56.252 [2024-10-08 10:36:16.693722] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:56.252 [2024-10-08 10:36:16.714653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.252 [2024-10-08 10:36:16.763860] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.637 test_start 00:04:57.637 test_end 00:04:57.637 Performance: 310642 events per second 00:04:57.637 00:04:57.637 real 0m1.311s 00:04:57.637 user 0m1.120s 00:04:57.637 sys 0m0.080s 00:04:57.637 10:36:17 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.637 ************************************ 00:04:57.637 10:36:17 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:57.637 END TEST event_reactor_perf 00:04:57.637 ************************************ 00:04:57.637 10:36:17 event -- event/event.sh@49 -- # uname -s 00:04:57.637 10:36:17 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:57.637 10:36:17 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:57.637 10:36:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.637 10:36:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.638 10:36:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.638 ************************************ 00:04:57.638 START TEST event_scheduler 00:04:57.638 ************************************ 00:04:57.638 10:36:17 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:57.638 * Looking for test storage... 
00:04:57.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:57.638 10:36:17 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:57.638 10:36:17 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:04:57.638 10:36:17 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:57.638 10:36:18 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.638 10:36:18 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:57.638 10:36:18 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.638 10:36:18 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:57.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.638 --rc genhtml_branch_coverage=1 00:04:57.638 --rc genhtml_function_coverage=1 00:04:57.638 --rc genhtml_legend=1 00:04:57.638 --rc geninfo_all_blocks=1 00:04:57.638 --rc geninfo_unexecuted_blocks=1 00:04:57.638 00:04:57.638 ' 00:04:57.638 10:36:18 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:57.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.638 --rc genhtml_branch_coverage=1 00:04:57.638 --rc genhtml_function_coverage=1 00:04:57.638 --rc genhtml_legend=1 00:04:57.638 --rc geninfo_all_blocks=1 00:04:57.638 --rc geninfo_unexecuted_blocks=1 00:04:57.638 00:04:57.638 ' 00:04:57.638 10:36:18 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:57.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.638 --rc genhtml_branch_coverage=1 00:04:57.638 --rc genhtml_function_coverage=1 00:04:57.638 --rc genhtml_legend=1 00:04:57.638 --rc geninfo_all_blocks=1 00:04:57.638 --rc geninfo_unexecuted_blocks=1 00:04:57.638 00:04:57.638 ' 00:04:57.638 10:36:18 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:57.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.638 --rc genhtml_branch_coverage=1 00:04:57.638 --rc genhtml_function_coverage=1 00:04:57.638 --rc genhtml_legend=1 00:04:57.638 --rc geninfo_all_blocks=1 00:04:57.638 --rc geninfo_unexecuted_blocks=1 00:04:57.638 00:04:57.638 ' 00:04:57.638 10:36:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:57.638 10:36:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71861 00:04:57.638 10:36:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.638 10:36:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71861 00:04:57.638 10:36:18 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 71861 ']' 00:04:57.638 10:36:18 event.event_scheduler -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:57.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.638 10:36:18 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.638 10:36:18 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.638 10:36:18 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.638 10:36:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.638 10:36:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:57.638 [2024-10-08 10:36:18.137903] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:04:57.638 [2024-10-08 10:36:18.138057] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71861 ] 00:04:57.903 [2024-10-08 10:36:18.271898] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:57.903 [2024-10-08 10:36:18.284817] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:57.903 [2024-10-08 10:36:18.349842] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.903 [2024-10-08 10:36:18.349941] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.903 [2024-10-08 10:36:18.350189] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.903 [2024-10-08 10:36:18.350200] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.476 10:36:19 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.476 10:36:19 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:58.476 10:36:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:58.476 10:36:19 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.476 10:36:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.476 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.476 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.476 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.476 POWER: Cannot set governor of lcore 0 to performance 00:04:58.476 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.476 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.476 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.476 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.476 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:58.476 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:58.476 POWER: Unable to set Power Management Environment for lcore 0 00:04:58.476 [2024-10-08 10:36:19.020133] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:58.476 [2024-10-08 10:36:19.020153] dpdk_governor.c: 191:_init: *ERROR*: 
Failed to initialize on core0 00:04:58.476 [2024-10-08 10:36:19.020167] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:58.477 [2024-10-08 10:36:19.020209] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:58.477 [2024-10-08 10:36:19.020233] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:58.477 [2024-10-08 10:36:19.020253] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:58.477 10:36:19 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.477 10:36:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:58.477 10:36:19 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.477 10:36:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.738 [2024-10-08 10:36:19.104498] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:58.738 10:36:19 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.738 10:36:19 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:58.738 10:36:19 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.738 10:36:19 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.738 10:36:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.738 ************************************ 00:04:58.738 START TEST scheduler_create_thread 00:04:58.738 ************************************ 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.738 2 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.738 3 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.738 4 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread 
-- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.738 5 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.738 6 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.738 7 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.738 8 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.738 9 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.738 10 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.738 10:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:58.739 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:04:58.739 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.739 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.739 10:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:58.739 10:36:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:58.739 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.739 10:36:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.713 10:36:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.713 10:36:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:59.713 10:36:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.713 10:36:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.094 10:36:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:01.094 10:36:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:01.094 10:36:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:01.094 10:36:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:01.094 10:36:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.028 ************************************ 00:05:02.029 END TEST scheduler_create_thread 00:05:02.029 ************************************ 00:05:02.029 10:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:02.029 00:05:02.029 real 0m3.370s 00:05:02.029 user 0m0.016s 00:05:02.029 sys 0m0.005s 00:05:02.029 10:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.029 10:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.029 10:36:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:02.029 10:36:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71861 00:05:02.029 10:36:22 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 71861 ']' 00:05:02.029 10:36:22 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 71861 00:05:02.029 10:36:22 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:02.029 10:36:22 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.029 10:36:22 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71861 00:05:02.029 killing process with pid 71861 00:05:02.029 10:36:22 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:02.029 10:36:22 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:02.029 10:36:22 event.event_scheduler -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 71861' 00:05:02.029 10:36:22 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 71861 00:05:02.029 10:36:22 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 71861 00:05:02.597 [2024-10-08 10:36:22.872227] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:02.597 00:05:02.597 real 0m5.143s 00:05:02.597 user 0m10.189s 00:05:02.597 sys 0m0.383s 00:05:02.597 ************************************ 00:05:02.597 END TEST event_scheduler 00:05:02.597 ************************************ 00:05:02.597 10:36:23 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.597 10:36:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.597 10:36:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:02.597 10:36:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:02.597 10:36:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.597 10:36:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.597 10:36:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.597 ************************************ 00:05:02.597 START TEST app_repeat 00:05:02.597 ************************************ 00:05:02.597 10:36:23 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:02.597 10:36:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.597 10:36:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.597 10:36:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:02.597 10:36:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.597 10:36:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:02.597 10:36:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:02.597 10:36:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:02.597 10:36:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71962 00:05:02.597 10:36:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.597 Process app_repeat pid: 71962 00:05:02.597 10:36:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71962' 00:05:02.597 10:36:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:02.597 spdk_app_start Round 0 00:05:02.597 10:36:23 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:02.597 10:36:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:02.597 10:36:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71962 /var/tmp/spdk-nbd.sock 00:05:02.597 10:36:23 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71962 ']' 00:05:02.597 10:36:23 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.597 10:36:23 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:02.597 10:36:23 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
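Each of the three app_repeat rounds below drives the same bdev/NBD sequence over the instance's private RPC socket. A condensed sketch of that sequence, assuming the instance is already listening on /var/tmp/spdk-nbd.sock (all four RPCs appear verbatim in the traces that follow):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096        # 64 MB malloc bdev, 4096-byte blocks; prints "Malloc0"
    $RPC nbd_start_disk Malloc0 /dev/nbd0  # expose the bdev as a kernel NBD block device
    $RPC nbd_get_disks                     # JSON list of active bdev-to-/dev/nbdX mappings
    $RPC nbd_stop_disk /dev/nbd0           # tear the export down again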
00:05:02.597 10:36:23 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.597 10:36:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.597 [2024-10-08 10:36:23.149251] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:02.597 [2024-10-08 10:36:23.149366] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71962 ] 00:05:02.856 [2024-10-08 10:36:23.278399] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:02.856 [2024-10-08 10:36:23.297470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.856 [2024-10-08 10:36:23.338553] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.856 [2024-10-08 10:36:23.338647] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.790 10:36:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.790 10:36:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:03.790 10:36:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.790 Malloc0 00:05:03.791 10:36:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.048 Malloc1 00:05:04.048 10:36:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.048 /dev/nbd0 00:05:04.048 10:36:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.307 10:36:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.307 10:36:24 
event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.307 1+0 records in 00:05:04.307 1+0 records out 00:05:04.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186169 s, 22.0 MB/s 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:04.307 10:36:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.307 10:36:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.307 10:36:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.307 /dev/nbd1 00:05:04.307 10:36:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:04.307 10:36:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.307 1+0 records in 00:05:04.307 1+0 records out 00:05:04.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024195 s, 16.9 MB/s 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:05:04.307 10:36:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:04.307 10:36:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.307 10:36:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.307 10:36:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.307 10:36:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.307 10:36:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:04.566 { 00:05:04.566 "nbd_device": "/dev/nbd0", 00:05:04.566 "bdev_name": "Malloc0" 00:05:04.566 }, 00:05:04.566 { 00:05:04.566 "nbd_device": "/dev/nbd1", 00:05:04.566 "bdev_name": "Malloc1" 00:05:04.566 } 00:05:04.566 ]' 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.566 { 00:05:04.566 "nbd_device": "/dev/nbd0", 00:05:04.566 "bdev_name": "Malloc0" 00:05:04.566 }, 00:05:04.566 { 00:05:04.566 "nbd_device": "/dev/nbd1", 00:05:04.566 "bdev_name": "Malloc1" 00:05:04.566 } 00:05:04.566 ]' 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.566 /dev/nbd1' 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.566 /dev/nbd1' 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.566 256+0 records in 00:05:04.566 256+0 records out 00:05:04.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00676049 s, 155 MB/s 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.566 256+0 records in 00:05:04.566 256+0 records out 00:05:04.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014816 s, 70.8 MB/s 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 
count=256 oflag=direct 00:05:04.566 256+0 records in 00:05:04.566 256+0 records out 00:05:04.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158181 s, 66.3 MB/s 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.566 10:36:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.825 10:36:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.825 10:36:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.825 10:36:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.825 10:36:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.825 10:36:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.825 10:36:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.825 10:36:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.825 10:36:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.825 10:36:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.825 10:36:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.083 10:36:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.083 10:36:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.083 10:36:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.083 10:36:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.083 
10:36:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.083 10:36:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.083 10:36:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.083 10:36:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.083 10:36:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.083 10:36:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.083 10:36:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.342 10:36:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.342 10:36:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.342 10:36:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.342 10:36:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.342 10:36:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.342 10:36:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.342 10:36:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.342 10:36:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.342 10:36:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.342 10:36:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.342 10:36:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.342 10:36:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.342 10:36:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:05.600 10:36:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:05.600 [2024-10-08 10:36:26.054472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.600 [2024-10-08 10:36:26.080815] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.600 [2024-10-08 10:36:26.080829] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.600 [2024-10-08 10:36:26.109746] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:05.600 [2024-10-08 10:36:26.109811] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:08.882 spdk_app_start Round 1 00:05:08.882 10:36:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:08.882 10:36:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:08.882 10:36:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71962 /var/tmp/spdk-nbd.sock 00:05:08.882 10:36:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71962 ']' 00:05:08.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.882 10:36:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.882 10:36:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.882 10:36:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
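The data pass in every round is the nbd_dd_data_verify helper: it writes one random MiB through each NBD device with O_DIRECT and then byte-compares the device against the source file, as the dd and cmp traces above and below show. A standalone sketch of the same check (the scratch path here is illustrative; the harness uses test/event/nbdrandtest):

    PATTERN=/tmp/nbdrandtest                                       # illustrative scratch file
    dd if=/dev/urandom of="$PATTERN" bs=4096 count=256             # 1 MiB random pattern
    dd if="$PATTERN" of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through the export
    cmp -b -n 1M "$PATTERN" /dev/nbd0                              # exits 0 (silently) when the data matches
    rm -f "$PATTERN"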
00:05:08.882 10:36:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.882 10:36:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.882 10:36:29 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.882 10:36:29 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:08.882 10:36:29 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.882 Malloc0 00:05:08.882 10:36:29 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.140 Malloc1 00:05:09.140 10:36:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.140 10:36:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.399 /dev/nbd0 00:05:09.399 10:36:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.399 10:36:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.399 10:36:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:09.399 10:36:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:09.399 10:36:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:09.399 10:36:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:09.399 10:36:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:09.399 10:36:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:09.399 10:36:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:09.399 10:36:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:09.399 10:36:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.399 1+0 records in 00:05:09.399 1+0 records out 
00:05:09.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171474 s, 23.9 MB/s 00:05:09.399 10:36:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.399 10:36:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:09.399 10:36:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.399 10:36:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:09.399 10:36:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:09.399 10:36:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.399 10:36:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.399 10:36:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.657 /dev/nbd1 00:05:09.657 10:36:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.657 10:36:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.657 10:36:30 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:09.657 10:36:30 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:09.657 10:36:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:09.657 10:36:30 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:09.657 10:36:30 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:09.657 10:36:30 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:09.657 10:36:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:09.657 10:36:30 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:09.657 10:36:30 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.657 1+0 records in 00:05:09.657 1+0 records out 00:05:09.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288277 s, 14.2 MB/s 00:05:09.657 10:36:30 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.657 10:36:30 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:09.657 10:36:30 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:09.657 10:36:30 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:09.657 10:36:30 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:09.657 10:36:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.657 10:36:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.657 10:36:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.657 10:36:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.657 10:36:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.915 { 00:05:09.915 "nbd_device": "/dev/nbd0", 00:05:09.915 "bdev_name": "Malloc0" 00:05:09.915 }, 00:05:09.915 { 00:05:09.915 "nbd_device": "/dev/nbd1", 00:05:09.915 "bdev_name": "Malloc1" 00:05:09.915 } 
00:05:09.915 ]' 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.915 { 00:05:09.915 "nbd_device": "/dev/nbd0", 00:05:09.915 "bdev_name": "Malloc0" 00:05:09.915 }, 00:05:09.915 { 00:05:09.915 "nbd_device": "/dev/nbd1", 00:05:09.915 "bdev_name": "Malloc1" 00:05:09.915 } 00:05:09.915 ]' 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.915 /dev/nbd1' 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.915 /dev/nbd1' 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.915 256+0 records in 00:05:09.915 256+0 records out 00:05:09.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00613845 s, 171 MB/s 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.915 256+0 records in 00:05:09.915 256+0 records out 00:05:09.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240425 s, 43.6 MB/s 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.915 256+0 records in 00:05:09.915 256+0 records out 00:05:09.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023394 s, 44.8 MB/s 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.915 10:36:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.916 10:36:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.916 10:36:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.916 10:36:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.916 10:36:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.916 10:36:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.916 10:36:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:09.916 10:36:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.916 10:36:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.174 10:36:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.174 10:36:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.174 10:36:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.174 10:36:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.174 10:36:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.174 10:36:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.174 10:36:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.174 10:36:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.174 10:36:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.174 10:36:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.432 10:36:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.432 10:36:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.432 10:36:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.432 10:36:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.432 10:36:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.432 10:36:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.432 10:36:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.432 10:36:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.432 10:36:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.432 10:36:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.432 10:36:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.691 10:36:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.691 10:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.691 10:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:10.691 10:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.691 10:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.691 10:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.691 10:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:10.691 10:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.691 10:36:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.691 10:36:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.691 10:36:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.691 10:36:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.691 10:36:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.949 10:36:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:10.949 [2024-10-08 10:36:31.415112] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.949 [2024-10-08 10:36:31.458715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.949 [2024-10-08 10:36:31.458863] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.949 [2024-10-08 10:36:31.501460] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:10.949 [2024-10-08 10:36:31.501510] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.253 spdk_app_start Round 2 00:05:14.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.253 10:36:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:14.253 10:36:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:14.253 10:36:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71962 /var/tmp/spdk-nbd.sock 00:05:14.253 10:36:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71962 ']' 00:05:14.253 10:36:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.253 10:36:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.253 10:36:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
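Between rounds the test tears the instance down over RPC and waits for the next one on the same socket, which is why each round is separated by a sleep and a fresh 'Waiting for process...' line. A rough sketch of that hand-off (the polling loop is an illustrative stand-in for the harness's waitforlisten helper):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC spdk_kill_instance SIGTERM        # graceful shutdown, as traced above
    sleep 3
    # poll until the new instance's UNIX socket reappears
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk-nbd.sock ] && break
        sleep 0.1
    done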
00:05:14.253 10:36:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.253 10:36:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.253 10:36:34 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.253 10:36:34 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:14.253 10:36:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.253 Malloc0 00:05:14.253 10:36:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.511 Malloc1 00:05:14.511 10:36:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.511 10:36:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.768 /dev/nbd0 00:05:14.768 10:36:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.768 10:36:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.768 10:36:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:14.768 10:36:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:14.768 10:36:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:14.768 10:36:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:14.768 10:36:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:14.768 10:36:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:14.768 10:36:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:14.768 10:36:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:14.768 10:36:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.768 1+0 records in 00:05:14.768 1+0 records out 
00:05:14.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252461 s, 16.2 MB/s 00:05:14.768 10:36:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.768 10:36:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:14.768 10:36:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.768 10:36:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:14.768 10:36:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:14.768 10:36:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.768 10:36:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.768 10:36:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.026 /dev/nbd1 00:05:15.026 10:36:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.026 10:36:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.026 10:36:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:15.026 10:36:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:15.026 10:36:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:15.026 10:36:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:15.026 10:36:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:15.026 10:36:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:15.026 10:36:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:15.026 10:36:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:15.026 10:36:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.026 1+0 records in 00:05:15.026 1+0 records out 00:05:15.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322589 s, 12.7 MB/s 00:05:15.026 10:36:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.026 10:36:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:15.026 10:36:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.026 10:36:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:15.026 10:36:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:15.026 10:36:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.026 10:36:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.026 10:36:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.026 10:36:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.026 10:36:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.026 10:36:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.026 { 00:05:15.026 "nbd_device": "/dev/nbd0", 00:05:15.026 "bdev_name": "Malloc0" 00:05:15.026 }, 00:05:15.026 { 00:05:15.026 "nbd_device": "/dev/nbd1", 00:05:15.026 "bdev_name": "Malloc1" 00:05:15.026 } 
00:05:15.026 ]' 00:05:15.026 10:36:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.026 10:36:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.026 { 00:05:15.026 "nbd_device": "/dev/nbd0", 00:05:15.026 "bdev_name": "Malloc0" 00:05:15.026 }, 00:05:15.026 { 00:05:15.026 "nbd_device": "/dev/nbd1", 00:05:15.026 "bdev_name": "Malloc1" 00:05:15.026 } 00:05:15.026 ]' 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.284 /dev/nbd1' 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.284 /dev/nbd1' 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.284 256+0 records in 00:05:15.284 256+0 records out 00:05:15.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00848199 s, 124 MB/s 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.284 256+0 records in 00:05:15.284 256+0 records out 00:05:15.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173995 s, 60.3 MB/s 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.284 256+0 records in 00:05:15.284 256+0 records out 00:05:15.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169765 s, 61.8 MB/s 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.284 10:36:35 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.284 10:36:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.543 10:36:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.543 10:36:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.543 10:36:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.543 10:36:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.543 10:36:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.543 10:36:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.543 10:36:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.543 10:36:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.543 10:36:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.543 10:36:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.543 10:36:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.543 10:36:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.543 10:36:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.543 10:36:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.543 10:36:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.543 10:36:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.543 10:36:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.543 10:36:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.543 10:36:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.543 10:36:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.543 10:36:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.801 10:36:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.801 10:36:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.801 10:36:36 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.801 10:36:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.801 10:36:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.801 10:36:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.801 10:36:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.801 10:36:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.801 10:36:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.801 10:36:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.801 10:36:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.801 10:36:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.801 10:36:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.060 10:36:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:16.422 [2024-10-08 10:36:36.644012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.422 [2024-10-08 10:36:36.676425] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.422 [2024-10-08 10:36:36.676433] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.422 [2024-10-08 10:36:36.710563] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:16.422 [2024-10-08 10:36:36.710619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.706 10:36:39 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71962 /var/tmp/spdk-nbd.sock 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71962 ']' 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
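The count check above shows how nbd_get_count turns the nbd_get_disks JSON into a number: jq extracts each .nbd_device and grep -c counts the /dev/nbd matches. grep -c exits nonzero when nothing matches, which is why a bare true shows up in the trace just before count=0. A sketch under those assumptions:

    # Count active nbd devices from nbd_get_disks JSON (sketch).
    nbd_get_count() {
        local rpc_server=$1 json names count
        json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')    # one /dev/nbdX per line
        # grep -c still prints 0 on no match but exits 1; '|| true' absorbs that
        count=$(echo "$names" | grep -c /dev/nbd || true)
        echo "$count"
    }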
00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:19.706 10:36:39 event.app_repeat -- event/event.sh@39 -- # killprocess 71962 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71962 ']' 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71962 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71962 00:05:19.706 killing process with pid 71962 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71962' 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71962 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71962 00:05:19.706 spdk_app_start is called in Round 0. 00:05:19.706 Shutdown signal received, stop current app iteration 00:05:19.706 Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 reinitialization... 00:05:19.706 spdk_app_start is called in Round 1. 00:05:19.706 Shutdown signal received, stop current app iteration 00:05:19.706 Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 reinitialization... 00:05:19.706 spdk_app_start is called in Round 2. 00:05:19.706 Shutdown signal received, stop current app iteration 00:05:19.706 Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 reinitialization... 00:05:19.706 spdk_app_start is called in Round 3. 00:05:19.706 Shutdown signal received, stop current app iteration 00:05:19.706 ************************************ 00:05:19.706 END TEST app_repeat 00:05:19.706 ************************************ 00:05:19.706 10:36:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:19.706 10:36:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:19.706 00:05:19.706 real 0m16.795s 00:05:19.706 user 0m37.346s 00:05:19.706 sys 0m2.048s 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.706 10:36:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.706 10:36:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:19.706 10:36:39 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:19.706 10:36:39 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.706 10:36:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.706 10:36:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.706 ************************************ 00:05:19.706 START TEST cpu_locks 00:05:19.706 ************************************ 00:05:19.706 10:36:39 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:19.706 * Looking for test storage... 
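killprocess, traced above for pid 71962, validates the pid with kill -0, reads the process name with ps, refuses to signal a sudo wrapper, then kills and reaps. A condensed sketch of that sequence (reactor_0 is simply what SPDK names its main thread):

    # Terminate a test app by pid, mirroring the trace's checks (sketch).
    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # pid must be alive
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for SPDK apps
        [ "$process_name" = sudo ] && return 1            # never SIGTERM a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                               # reap; exit code may be nonzero
    }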
00:05:19.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:19.706 10:36:40 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:19.706 10:36:40 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:19.706 10:36:40 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:19.706 10:36:40 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.706 10:36:40 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:19.706 10:36:40 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.706 10:36:40 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:19.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.706 --rc genhtml_branch_coverage=1 00:05:19.706 --rc genhtml_function_coverage=1 00:05:19.706 --rc genhtml_legend=1 00:05:19.706 --rc geninfo_all_blocks=1 00:05:19.706 --rc geninfo_unexecuted_blocks=1 00:05:19.706 00:05:19.706 ' 00:05:19.706 10:36:40 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:19.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.706 --rc genhtml_branch_coverage=1 00:05:19.706 --rc genhtml_function_coverage=1 
00:05:19.706 --rc genhtml_legend=1 00:05:19.706 --rc geninfo_all_blocks=1 00:05:19.706 --rc geninfo_unexecuted_blocks=1 00:05:19.706 00:05:19.706 ' 00:05:19.706 10:36:40 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:19.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.706 --rc genhtml_branch_coverage=1 00:05:19.706 --rc genhtml_function_coverage=1 00:05:19.706 --rc genhtml_legend=1 00:05:19.706 --rc geninfo_all_blocks=1 00:05:19.706 --rc geninfo_unexecuted_blocks=1 00:05:19.706 00:05:19.706 ' 00:05:19.706 10:36:40 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:19.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.706 --rc genhtml_branch_coverage=1 00:05:19.706 --rc genhtml_function_coverage=1 00:05:19.706 --rc genhtml_legend=1 00:05:19.706 --rc geninfo_all_blocks=1 00:05:19.706 --rc geninfo_unexecuted_blocks=1 00:05:19.706 00:05:19.706 ' 00:05:19.706 10:36:40 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:19.706 10:36:40 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:19.707 10:36:40 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:19.707 10:36:40 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:19.707 10:36:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.707 10:36:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.707 10:36:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.707 ************************************ 00:05:19.707 START TEST default_locks 00:05:19.707 ************************************ 00:05:19.707 10:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:19.707 10:36:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72387 00:05:19.707 10:36:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72387 00:05:19.707 10:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 72387 ']' 00:05:19.707 10:36:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.707 10:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.707 10:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.707 10:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.707 10:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.707 10:36:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.707 [2024-10-08 10:36:40.181104] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
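The lcov gate above runs a dotted-version comparison: lt splits both versions on '.', '-' or ':' into arrays and compares them component-wise, treating missing components as zero. A sketch of that loop as it appears in the trace:

    # Return 0 if version $1 sorts strictly before version $2 (sketch).
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # e.g. 1.15 vs 2 -> 1 < 2
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal is not less-than
    }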
00:05:19.707 [2024-10-08 10:36:40.181901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72387 ] 00:05:19.967 [2024-10-08 10:36:40.310353] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:19.967 [2024-10-08 10:36:40.328485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.967 [2024-10-08 10:36:40.360676] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.537 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.537 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:20.537 10:36:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72387 00:05:20.537 10:36:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72387 00:05:20.537 10:36:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.798 10:36:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72387 00:05:20.798 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 72387 ']' 00:05:20.798 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 72387 00:05:20.798 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:20.798 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.798 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72387 00:05:20.798 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.798 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.798 killing process with pid 72387 00:05:20.798 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72387' 00:05:20.798 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 72387 00:05:20.798 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 72387 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72387 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72387 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 72387 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 72387 ']' 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 
-- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.060 ERROR: process (pid: 72387) is no longer running 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.060 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72387) - No such process 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:21.060 00:05:21.060 real 0m1.387s 00:05:21.060 user 0m1.422s 00:05:21.060 sys 0m0.399s 00:05:21.060 ************************************ 00:05:21.060 END TEST default_locks 00:05:21.060 ************************************ 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.060 10:36:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.060 10:36:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:21.060 10:36:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.060 10:36:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.060 10:36:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.060 ************************************ 00:05:21.060 START TEST default_locks_via_rpc 00:05:21.060 ************************************ 00:05:21.060 10:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:21.060 10:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72429 00:05:21.060 10:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72429 00:05:21.060 10:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72429 ']' 00:05:21.060 10:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
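locks_exist, exercised just before the teardown above, asserts the core lock through the kernel's lock table: lslocks -p <pid> must list a lock whose path contains spdk_cpu_lock. Sketch (the per-core lock-file naming is inferred from the grep pattern in the trace):

    # Assert that pid $1 holds an SPDK per-core file lock (sketch).
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }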
00:05:21.060 10:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.060 10:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.060 10:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.060 10:36:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.060 10:36:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.060 [2024-10-08 10:36:41.625048] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:21.060 [2024-10-08 10:36:41.625168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72429 ] 00:05:21.320 [2024-10-08 10:36:41.753257] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:21.320 [2024-10-08 10:36:41.770503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.320 [2024-10-08 10:36:41.803316] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.892 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.892 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:21.892 10:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:21.892 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.892 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.892 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.892 10:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:21.892 10:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:21.892 10:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:21.892 10:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:21.892 10:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:21.892 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.892 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72429 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72429 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72429 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@950 -- # '[' -z 72429 ']' 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 72429 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72429 00:05:22.152 killing process with pid 72429 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72429' 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 72429 00:05:22.152 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 72429 00:05:22.412 ************************************ 00:05:22.412 END TEST default_locks_via_rpc 00:05:22.412 00:05:22.412 real 0m1.376s 00:05:22.412 user 0m1.405s 00:05:22.412 sys 0m0.406s 00:05:22.412 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.412 10:36:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.412 ************************************ 00:05:22.412 10:36:42 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:22.412 10:36:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.412 10:36:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.412 10:36:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.673 ************************************ 00:05:22.673 START TEST non_locking_app_on_locked_coremask 00:05:22.673 ************************************ 00:05:22.673 10:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:22.673 10:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72481 00:05:22.673 10:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.673 10:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72481 /var/tmp/spdk.sock 00:05:22.673 10:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72481 ']' 00:05:22.673 10:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.673 10:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.673 10:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
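default_locks_via_rpc, traced above, toggles the same locks at runtime: framework_disable_cpumask_locks releases every per-core lock and framework_enable_cpumask_locks re-acquires them, with lslocks confirming each state. A sketch of the round trip ($spdk_pid stands for the target's pid, 72429 in the trace):

    # Drop and re-take CPU core locks on a running target (sketch).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" framework_disable_cpumask_locks                    # locks released
    lslocks -p "$spdk_pid" | grep -c spdk_cpu_lock || true    # expect 0 matches
    "$rpc" framework_enable_cpumask_locks                     # locks re-acquired
    lslocks -p "$spdk_pid" | grep -q spdk_cpu_lock            # expect a match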
00:05:22.673 10:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.673 10:36:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.673 [2024-10-08 10:36:43.057604] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:22.673 [2024-10-08 10:36:43.057722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72481 ] 00:05:22.673 [2024-10-08 10:36:43.185978] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:22.673 [2024-10-08 10:36:43.207065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.673 [2024-10-08 10:36:43.239255] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.611 10:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.611 10:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:23.611 10:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72491 00:05:23.611 10:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:23.611 10:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72491 /var/tmp/spdk2.sock 00:05:23.611 10:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72491 ']' 00:05:23.611 10:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.611 10:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.611 10:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.611 10:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.611 10:36:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.611 [2024-10-08 10:36:43.960411] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:23.611 [2024-10-08 10:36:43.960521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72491 ] 00:05:23.611 [2024-10-08 10:36:44.087556] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:23.611 [2024-10-08 10:36:44.111991] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
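non_locking_app_on_locked_coremask, starting above, shows that a second target on the same core mask boots cleanly as long as it opts out of locking with --disable-cpumask-locks. The launch pattern, condensed (paths as in the trace):

    # Locked first instance, unlocked second: both run on core 0 (sketch).
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                                      # claims the core 0 lock
    pid1=$!
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                   # boots despite the overlap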
00:05:23.611 [2024-10-08 10:36:44.112027] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.611 [2024-10-08 10:36:44.176337] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.550 10:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.550 10:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:24.550 10:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72481 00:05:24.550 10:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72481 00:05:24.550 10:36:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.808 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72481 00:05:24.808 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72481 ']' 00:05:24.808 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72481 00:05:24.808 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:24.808 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.808 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72481 00:05:24.808 killing process with pid 72481 00:05:24.808 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.808 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.808 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72481' 00:05:24.808 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72481 00:05:24.808 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72481 00:05:25.375 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72491 00:05:25.375 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72491 ']' 00:05:25.375 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72491 00:05:25.375 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:25.375 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.375 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72491 00:05:25.375 killing process with pid 72491 00:05:25.375 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.375 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.375 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72491' 00:05:25.375 10:36:45 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72491 00:05:25.375 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72491 00:05:25.375 ************************************ 00:05:25.375 END TEST non_locking_app_on_locked_coremask 00:05:25.375 ************************************ 00:05:25.375 00:05:25.375 real 0m2.937s 00:05:25.375 user 0m3.233s 00:05:25.375 sys 0m0.824s 00:05:25.375 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.375 10:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.634 10:36:45 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:25.634 10:36:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.634 10:36:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.634 10:36:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.634 ************************************ 00:05:25.634 START TEST locking_app_on_unlocked_coremask 00:05:25.634 ************************************ 00:05:25.634 10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:25.634 10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72554 00:05:25.634 10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:25.634 10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72554 /var/tmp/spdk.sock 00:05:25.634 10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72554 ']' 00:05:25.634 10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.634 10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.634 10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.634 10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.634 10:36:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.634 [2024-10-08 10:36:46.050627] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:25.634 [2024-10-08 10:36:46.050753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72554 ] 00:05:25.634 [2024-10-08 10:36:46.178956] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:25.634 [2024-10-08 10:36:46.195316] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
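locking_app_on_unlocked_coremask flips the roles: the first target starts with --disable-cpumask-locks, leaving core 0 unlocked, so the second, lock-taking target can claim it. Sketch, reusing $spdk_tgt from the previous snippet:

    # Unlocked first instance, locking second instance (sketch).
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &              # leaves core 0 unlocked
    pid1=$!
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &               # free to claim core 0
    pid2=$!
    lslocks -p "$pid2" | grep -q spdk_cpu_lock                # lock belongs to pid2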
00:05:25.634 [2024-10-08 10:36:46.195341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.892 [2024-10-08 10:36:46.230449] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.459 10:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.459 10:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:26.459 10:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:26.459 10:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72560 00:05:26.459 10:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72560 /var/tmp/spdk2.sock 00:05:26.459 10:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72560 ']' 00:05:26.459 10:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.459 10:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.459 10:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.459 10:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.459 10:36:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.459 [2024-10-08 10:36:46.946543] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:26.459 [2024-10-08 10:36:46.946774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72560 ] 00:05:26.717 [2024-10-08 10:36:47.073658] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
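Every 'Waiting for process to start up and listen on UNIX domain socket...' line above comes from waitforlisten, which polls the RPC socket until the target answers or the process dies. A sketch of the loop (max_retries=100 appears in the trace; probing with rpc_get_methods is an assumption about the helper's internals):

    # Wait until an SPDK app serves RPCs on its unix socket (sketch).
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" || return 1                        # died during startup
            if "$rpc" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                      # socket is answering
            fi
            sleep 0.1
        done
        return 1
    }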
00:05:26.717 [2024-10-08 10:36:47.091065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.717 [2024-10-08 10:36:47.147269] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.283 10:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.283 10:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:27.283 10:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72560 00:05:27.283 10:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72560 00:05:27.283 10:36:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.541 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72554 00:05:27.541 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72554 ']' 00:05:27.541 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 72554 00:05:27.541 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:27.541 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.541 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72554 00:05:27.800 killing process with pid 72554 00:05:27.800 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.800 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.800 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72554' 00:05:27.800 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 72554 00:05:27.800 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 72554 00:05:28.058 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72560 00:05:28.058 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72560 ']' 00:05:28.058 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 72560 00:05:28.058 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:28.058 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:28.058 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72560 00:05:28.058 killing process with pid 72560 00:05:28.058 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.058 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.058 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72560' 00:05:28.058 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@969 -- # kill 72560 00:05:28.058 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 72560 00:05:28.316 00:05:28.316 real 0m2.842s 00:05:28.316 user 0m3.151s 00:05:28.316 sys 0m0.757s 00:05:28.316 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.316 10:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.316 ************************************ 00:05:28.316 END TEST locking_app_on_unlocked_coremask 00:05:28.316 ************************************ 00:05:28.316 10:36:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:28.316 10:36:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.316 10:36:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.316 10:36:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.316 ************************************ 00:05:28.316 START TEST locking_app_on_locked_coremask 00:05:28.316 ************************************ 00:05:28.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.316 10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:28.316 10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72618 00:05:28.316 10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72618 /var/tmp/spdk.sock 00:05:28.316 10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72618 ']' 00:05:28.316 10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.316 10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.316 10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.316 10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.316 10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.316 10:36:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.575 [2024-10-08 10:36:48.942003] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:28.575 [2024-10-08 10:36:48.942455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72618 ] 00:05:28.575 [2024-10-08 10:36:49.065004] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:28.575 [2024-10-08 10:36:49.083278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.575 [2024-10-08 10:36:49.112097] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72634 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72634 /var/tmp/spdk2.sock 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72634 /var/tmp/spdk2.sock 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72634 /var/tmp/spdk2.sock 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72634 ']' 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.509 10:36:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.509 [2024-10-08 10:36:49.863266] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:29.509 [2024-10-08 10:36:49.863558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72634 ] 00:05:29.509 [2024-10-08 10:36:49.993190] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:29.509 [2024-10-08 10:36:50.010767] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72618 has claimed it. 
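The claim_cpu_cores error above is the core-mask lock working as intended: pid 72618 already holds core 0, so the second instance refuses to start. The lock holders can be inspected from a shell with lslocks, the same tool the locks_exist helper runs in the trace below:

  ls /var/tmp/spdk_cpu_lock_*        # one lock file per claimed core
  lslocks | grep spdk_cpu_lock       # lists the PID holding each core lock
  # the harness's own check for one pid (cpu_locks.sh@22):
  lslocks -p 72618 | grep -q spdk_cpu_lock && echo "pid 72618 holds a core lock"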
00:05:29.509 [2024-10-08 10:36:50.010814] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:30.077 ERROR: process (pid: 72634) is no longer running 00:05:30.077 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72634) - No such process 00:05:30.077 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.077 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:30.077 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:30.077 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.077 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:30.077 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.077 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72618 00:05:30.077 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72618 00:05:30.077 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.336 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72618 00:05:30.336 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72618 ']' 00:05:30.336 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72618 00:05:30.336 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:30.336 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.336 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72618 00:05:30.336 killing process with pid 72618 00:05:30.336 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.336 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.336 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72618' 00:05:30.336 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72618 00:05:30.336 10:36:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72618 00:05:30.596 ************************************ 00:05:30.596 END TEST locking_app_on_locked_coremask 00:05:30.596 ************************************ 00:05:30.596 00:05:30.596 real 0m2.138s 00:05:30.596 user 0m2.371s 00:05:30.596 sys 0m0.555s 00:05:30.596 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.596 10:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.596 10:36:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:30.596 10:36:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.596 10:36:51 event.cpu_locks 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.596 10:36:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.596 ************************************ 00:05:30.596 START TEST locking_overlapped_coremask 00:05:30.596 ************************************ 00:05:30.596 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:30.596 10:36:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72686 00:05:30.596 10:36:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72686 /var/tmp/spdk.sock 00:05:30.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.596 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72686 ']' 00:05:30.596 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.596 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.596 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.596 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.596 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.596 10:36:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:30.596 [2024-10-08 10:36:51.144708] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:30.596 [2024-10-08 10:36:51.144840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72686 ] 00:05:30.856 [2024-10-08 10:36:51.274579] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
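This test starts its first target with -m 0x7 (cores 0-2); the second, launched below with -m 0x1c (cores 2-4), collides with it on core 2. A small sketch for decoding such hex masks; decode_mask is a hypothetical helper, not part of the harness:

  decode_mask() {
    local mask=$(($1)) core cores=()
    for ((core = 0; mask >> core; core++)); do
      # collect every core whose bit is set in the mask
      (((mask >> core) & 1)) && cores+=("$core") || true
    done
    echo "$1 -> cores ${cores[*]}"
  }
  decode_mask 0x7    # 0x7 -> cores 0 1 2
  decode_mask 0x1c   # 0x1c -> cores 2 3 4 (core 2 overlaps the first target)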
00:05:30.856 [2024-10-08 10:36:51.291109] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.856 [2024-10-08 10:36:51.321226] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.856 [2024-10-08 10:36:51.321497] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.856 [2024-10-08 10:36:51.321575] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72694 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72694 /var/tmp/spdk2.sock 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72694 /var/tmp/spdk2.sock 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72694 /var/tmp/spdk2.sock 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72694 ']' 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.423 10:36:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.721 [2024-10-08 10:36:52.044378] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:31.721 [2024-10-08 10:36:52.044493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72694 ] 00:05:31.721 [2024-10-08 10:36:52.178191] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:05:31.721 [2024-10-08 10:36:52.199656] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72686 has claimed it. 00:05:31.721 [2024-10-08 10:36:52.199703] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:32.314 ERROR: process (pid: 72694) is no longer running 00:05:32.314 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72694) - No such process 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72686 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 72686 ']' 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 72686 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72686 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.314 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.315 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72686' 00:05:32.315 killing process with pid 72686 00:05:32.315 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 72686 00:05:32.315 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 72686 00:05:32.575 00:05:32.575 real 0m1.874s 00:05:32.575 user 0m5.209s 00:05:32.575 sys 0m0.369s 00:05:32.575 10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.575 
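check_remaining_locks, traced above at cpu_locks.sh@36-38, verifies that after the failed overlapping launch only the surviving target's lock files are left on disk. Its core is a glob-versus-brace-expansion comparison, roughly:

  locks=(/var/tmp/spdk_cpu_lock_*)                   # what is actually on disk
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) # what cores 0-2 should leave
  [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "only cores 0-2 are locked"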
10:36:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.575 ************************************ 00:05:32.575 END TEST locking_overlapped_coremask 00:05:32.575 ************************************ 00:05:32.575 10:36:52 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:32.575 10:36:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.575 10:36:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.575 10:36:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.575 ************************************ 00:05:32.575 START TEST locking_overlapped_coremask_via_rpc 00:05:32.575 ************************************ 00:05:32.575 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:32.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.575 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72736 00:05:32.575 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72736 /var/tmp/spdk.sock 00:05:32.575 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72736 ']' 00:05:32.575 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.575 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.575 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:32.575 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.575 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.575 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.575 [2024-10-08 10:36:53.068622] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:32.575 [2024-10-08 10:36:53.068715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72736 ] 00:05:32.834 [2024-10-08 10:36:53.191559] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:32.834 [2024-10-08 10:36:53.208074] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:32.834 [2024-10-08 10:36:53.208174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.834 [2024-10-08 10:36:53.239665] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.834 [2024-10-08 10:36:53.239901] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.834 [2024-10-08 10:36:53.239958] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.400 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:33.400 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:33.400 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72754 00:05:33.400 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72754 /var/tmp/spdk2.sock 00:05:33.400 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72754 ']' 00:05:33.400 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:33.400 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:33.400 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.400 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:33.400 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.400 10:36:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.657 [2024-10-08 10:36:53.986272] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:33.657 [2024-10-08 10:36:53.986563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72754 ] 00:05:33.657 [2024-10-08 10:36:54.116921] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:33.657 [2024-10-08 10:36:54.140318] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
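Both targets above report 'CPU core locks deactivated', which is why two instances whose masks overlap on core 2 can coexist in this test. A sketch of the same arrangement, paths as in this run:

  # the overlap on core 2 is tolerated only because neither side takes locks
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock \
      --disable-cpumask-locks &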
00:05:33.657 [2024-10-08 10:36:54.140353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.657 [2024-10-08 10:36:54.204996] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.657 [2024-10-08 10:36:54.205104] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.657 [2024-10-08 10:36:54.205168] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.592 [2024-10-08 10:36:54.849924] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72736 has claimed it. 00:05:34.592 request: 00:05:34.592 { 00:05:34.592 "method": "framework_enable_cpumask_locks", 00:05:34.592 "req_id": 1 00:05:34.592 } 00:05:34.592 Got JSON-RPC error response 00:05:34.592 response: 00:05:34.592 { 00:05:34.592 "code": -32603, 00:05:34.592 "message": "Failed to claim CPU core: 2" 00:05:34.592 } 00:05:34.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
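The failed call above is a plain JSON-RPC 2.0 round trip: the first target had already claimed cores 0-2 via framework_enable_cpumask_locks, so the same method on the second target's socket fails on the shared core 2. A hedged sketch of driving it by hand (nc from openbsd-netcat is assumed available; rpc.py is the stock client used throughout this log):

  # via the stock client, against the second target's socket:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
      framework_enable_cpumask_locks
  # or as a raw request over the UNIX socket:
  printf '%s' '{"jsonrpc":"2.0","method":"framework_enable_cpumask_locks","id":1}' |
      nc -U -w 2 /var/tmp/spdk2.sock
  # expected here, per the trace: error -32603, "Failed to claim CPU core: 2"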
00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72736 /var/tmp/spdk.sock 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72736 ']' 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.592 10:36:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.592 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.592 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:34.592 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72754 /var/tmp/spdk2.sock 00:05:34.592 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72754 ']' 00:05:34.592 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.592 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.592 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
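waitforlisten, invoked before every RPC interaction in these tests, simply polls the target's socket until it answers or the process dies. A hypothetical re-implementation for illustration, not the harness's actual code:

  waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i=100
    while ((i--)); do
      kill -0 "$pid" 2>/dev/null || return 1     # target process died
      # any successful RPC means the socket is up; rpc_get_methods is cheap
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 \
          rpc_get_methods &>/dev/null && return 0
      sleep 0.1
    done
    return 1                                     # never started listening
  }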
00:05:34.592 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.592 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.851 ************************************ 00:05:34.851 END TEST locking_overlapped_coremask_via_rpc 00:05:34.851 ************************************ 00:05:34.851 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.851 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:34.851 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:34.851 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:34.851 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:34.851 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:34.851 00:05:34.851 real 0m2.264s 00:05:34.851 user 0m1.073s 00:05:34.851 sys 0m0.118s 00:05:34.851 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.851 10:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.851 10:36:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:34.851 10:36:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72736 ]] 00:05:34.851 10:36:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72736 00:05:34.851 10:36:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72736 ']' 00:05:34.852 10:36:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72736 00:05:34.852 10:36:55 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:34.852 10:36:55 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.852 10:36:55 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72736 00:05:34.852 killing process with pid 72736 00:05:34.852 10:36:55 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.852 10:36:55 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.852 10:36:55 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72736' 00:05:34.852 10:36:55 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72736 00:05:34.852 10:36:55 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72736 00:05:35.110 10:36:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72754 ]] 00:05:35.110 10:36:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72754 00:05:35.110 10:36:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72754 ']' 00:05:35.110 10:36:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72754 00:05:35.110 10:36:55 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:35.110 10:36:55 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:35.110 
10:36:55 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72754 00:05:35.110 killing process with pid 72754 00:05:35.110 10:36:55 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:35.110 10:36:55 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:35.110 10:36:55 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72754' 00:05:35.110 10:36:55 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72754 00:05:35.110 10:36:55 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72754 00:05:35.370 10:36:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.370 Process with pid 72736 is not found 00:05:35.370 Process with pid 72754 is not found 00:05:35.370 10:36:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:35.370 10:36:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72736 ]] 00:05:35.370 10:36:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72736 00:05:35.370 10:36:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72736 ']' 00:05:35.370 10:36:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72736 00:05:35.370 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72736) - No such process 00:05:35.370 10:36:55 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72736 is not found' 00:05:35.370 10:36:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72754 ]] 00:05:35.370 10:36:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72754 00:05:35.370 10:36:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72754 ']' 00:05:35.370 10:36:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72754 00:05:35.370 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72754) - No such process 00:05:35.370 10:36:55 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72754 is not found' 00:05:35.370 10:36:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:35.370 ************************************ 00:05:35.370 END TEST cpu_locks 00:05:35.370 ************************************ 00:05:35.370 00:05:35.370 real 0m15.877s 00:05:35.370 user 0m28.059s 00:05:35.370 sys 0m4.125s 00:05:35.370 10:36:55 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.370 10:36:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.370 ************************************ 00:05:35.370 END TEST event 00:05:35.370 ************************************ 00:05:35.370 00:05:35.370 real 0m42.265s 00:05:35.370 user 1m22.098s 00:05:35.370 sys 0m7.060s 00:05:35.370 10:36:55 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.370 10:36:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.370 10:36:55 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:35.370 10:36:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.370 10:36:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.370 10:36:55 -- common/autotest_common.sh@10 -- # set +x 00:05:35.370 ************************************ 00:05:35.370 START TEST thread 00:05:35.370 ************************************ 00:05:35.370 10:36:55 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:35.629 * Looking for test storage... 
00:05:35.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:35.629 10:36:56 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:35.629 10:36:56 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:35.629 10:36:56 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:35.629 10:36:56 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:35.629 10:36:56 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.630 10:36:56 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.630 10:36:56 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.630 10:36:56 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.630 10:36:56 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.630 10:36:56 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.630 10:36:56 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.630 10:36:56 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.630 10:36:56 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.630 10:36:56 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.630 10:36:56 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.630 10:36:56 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:35.630 10:36:56 thread -- scripts/common.sh@345 -- # : 1 00:05:35.630 10:36:56 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.630 10:36:56 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.630 10:36:56 thread -- scripts/common.sh@365 -- # decimal 1 00:05:35.630 10:36:56 thread -- scripts/common.sh@353 -- # local d=1 00:05:35.630 10:36:56 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.630 10:36:56 thread -- scripts/common.sh@355 -- # echo 1 00:05:35.630 10:36:56 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.630 10:36:56 thread -- scripts/common.sh@366 -- # decimal 2 00:05:35.630 10:36:56 thread -- scripts/common.sh@353 -- # local d=2 00:05:35.630 10:36:56 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.630 10:36:56 thread -- scripts/common.sh@355 -- # echo 2 00:05:35.630 10:36:56 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.630 10:36:56 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.630 10:36:56 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.630 10:36:56 thread -- scripts/common.sh@368 -- # return 0 00:05:35.630 10:36:56 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.630 10:36:56 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:35.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.630 --rc genhtml_branch_coverage=1 00:05:35.630 --rc genhtml_function_coverage=1 00:05:35.630 --rc genhtml_legend=1 00:05:35.630 --rc geninfo_all_blocks=1 00:05:35.630 --rc geninfo_unexecuted_blocks=1 00:05:35.630 00:05:35.630 ' 00:05:35.630 10:36:56 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:35.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.630 --rc genhtml_branch_coverage=1 00:05:35.630 --rc genhtml_function_coverage=1 00:05:35.630 --rc genhtml_legend=1 00:05:35.630 --rc geninfo_all_blocks=1 00:05:35.630 --rc geninfo_unexecuted_blocks=1 00:05:35.630 00:05:35.630 ' 00:05:35.630 10:36:56 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:35.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:35.630 --rc genhtml_branch_coverage=1 00:05:35.630 --rc genhtml_function_coverage=1 00:05:35.630 --rc genhtml_legend=1 00:05:35.630 --rc geninfo_all_blocks=1 00:05:35.630 --rc geninfo_unexecuted_blocks=1 00:05:35.630 00:05:35.630 ' 00:05:35.630 10:36:56 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:35.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.630 --rc genhtml_branch_coverage=1 00:05:35.630 --rc genhtml_function_coverage=1 00:05:35.630 --rc genhtml_legend=1 00:05:35.630 --rc geninfo_all_blocks=1 00:05:35.630 --rc geninfo_unexecuted_blocks=1 00:05:35.630 00:05:35.630 ' 00:05:35.630 10:36:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:35.630 10:36:56 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:35.630 10:36:56 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.630 10:36:56 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.630 ************************************ 00:05:35.630 START TEST thread_poller_perf 00:05:35.630 ************************************ 00:05:35.630 10:36:56 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:35.630 [2024-10-08 10:36:56.108471] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:35.630 [2024-10-08 10:36:56.108675] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72881 ] 00:05:35.891 [2024-10-08 10:36:56.236459] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:35.891 [2024-10-08 10:36:56.256977] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.891 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:35.891 [2024-10-08 10:36:56.292029] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.833 [2024-10-08T10:36:57.410Z] ====================================== 00:05:36.833 [2024-10-08T10:36:57.410Z] busy:2615162126 (cyc) 00:05:36.833 [2024-10-08T10:36:57.410Z] total_run_count: 307000 00:05:36.833 [2024-10-08T10:36:57.410Z] tsc_hz: 2600000000 (cyc) 00:05:36.833 [2024-10-08T10:36:57.410Z] ====================================== 00:05:36.833 [2024-10-08T10:36:57.410Z] poller_cost: 8518 (cyc), 3276 (nsec) 00:05:36.833 00:05:36.833 real 0m1.306s 00:05:36.833 user 0m1.122s 00:05:36.833 sys 0m0.075s 00:05:36.833 10:36:57 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.833 ************************************ 00:05:36.833 END TEST thread_poller_perf 00:05:36.833 ************************************ 00:05:36.833 10:36:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:37.094 10:36:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.094 10:36:57 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:37.094 10:36:57 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.094 10:36:57 thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.094 ************************************ 00:05:37.094 START TEST thread_poller_perf 00:05:37.094 ************************************ 00:05:37.094 10:36:57 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.094 [2024-10-08 10:36:57.481486] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:37.094 [2024-10-08 10:36:57.481782] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72923 ] 00:05:37.094 [2024-10-08 10:36:57.612297] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:37.094 [2024-10-08 10:36:57.632865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.354 Running 1000 pollers for 1 seconds with 0 microseconds period. 
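poller_cost in the table above is derived from the two counters over it: busy cycles divided by total_run_count, then converted to nanoseconds through tsc_hz. The arithmetic for this run checks out:

  awk 'BEGIN {
    busy = 2615162126; runs = 307000; hz = 2600000000
    cyc = busy / runs                    # ~8518 cycles per poller invocation
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz
  }'
  # prints: poller_cost: 8518 (cyc), 3276 (nsec), matching the run above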
00:05:37.354 [2024-10-08 10:36:57.683163] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.296 [2024-10-08T10:36:58.873Z] ====================================== 00:05:38.296 [2024-10-08T10:36:58.873Z] busy:2603903098 (cyc) 00:05:38.296 [2024-10-08T10:36:58.873Z] total_run_count: 3929000 00:05:38.296 [2024-10-08T10:36:58.873Z] tsc_hz: 2600000000 (cyc) 00:05:38.296 [2024-10-08T10:36:58.873Z] ====================================== 00:05:38.296 [2024-10-08T10:36:58.873Z] poller_cost: 662 (cyc), 254 (nsec) 00:05:38.296 ************************************ 00:05:38.296 END TEST thread_poller_perf 00:05:38.296 ************************************ 00:05:38.296 00:05:38.296 real 0m1.315s 00:05:38.296 user 0m1.118s 00:05:38.296 sys 0m0.087s 00:05:38.296 10:36:58 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.296 10:36:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:38.296 10:36:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:38.296 00:05:38.296 real 0m2.883s 00:05:38.296 user 0m2.345s 00:05:38.296 sys 0m0.291s 00:05:38.296 ************************************ 00:05:38.296 END TEST thread 00:05:38.296 ************************************ 00:05:38.296 10:36:58 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.296 10:36:58 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.296 10:36:58 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:38.296 10:36:58 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:38.296 10:36:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.296 10:36:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.296 10:36:58 -- common/autotest_common.sh@10 -- # set +x 00:05:38.296 ************************************ 00:05:38.296 START TEST app_cmdline 00:05:38.296 ************************************ 00:05:38.296 10:36:58 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:38.558 * Looking for test storage... 
00:05:38.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:38.558 10:36:58 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:38.558 10:36:58 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:38.558 10:36:58 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:05:38.558 10:36:58 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:38.558 10:36:58 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:38.558 10:36:59 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.558 10:36:59 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:38.558 10:36:59 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.558 10:36:59 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.558 10:36:59 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.558 10:36:59 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:38.558 10:36:59 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.558 10:36:59 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:38.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.558 --rc genhtml_branch_coverage=1 00:05:38.558 --rc genhtml_function_coverage=1 00:05:38.558 --rc genhtml_legend=1 00:05:38.558 --rc geninfo_all_blocks=1 00:05:38.558 --rc geninfo_unexecuted_blocks=1 00:05:38.558 00:05:38.558 ' 00:05:38.558 10:36:59 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:38.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.558 --rc genhtml_branch_coverage=1 00:05:38.558 --rc genhtml_function_coverage=1 00:05:38.558 --rc genhtml_legend=1 00:05:38.558 --rc geninfo_all_blocks=1 00:05:38.558 --rc geninfo_unexecuted_blocks=1 00:05:38.558 
00:05:38.558 ' 00:05:38.558 10:36:59 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:38.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.558 --rc genhtml_branch_coverage=1 00:05:38.558 --rc genhtml_function_coverage=1 00:05:38.558 --rc genhtml_legend=1 00:05:38.558 --rc geninfo_all_blocks=1 00:05:38.558 --rc geninfo_unexecuted_blocks=1 00:05:38.558 00:05:38.558 ' 00:05:38.558 10:36:59 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:38.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.558 --rc genhtml_branch_coverage=1 00:05:38.558 --rc genhtml_function_coverage=1 00:05:38.558 --rc genhtml_legend=1 00:05:38.558 --rc geninfo_all_blocks=1 00:05:38.558 --rc geninfo_unexecuted_blocks=1 00:05:38.558 00:05:38.558 ' 00:05:38.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.558 10:36:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:38.558 10:36:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73001 00:05:38.558 10:36:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73001 00:05:38.558 10:36:59 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 73001 ']' 00:05:38.558 10:36:59 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.558 10:36:59 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.558 10:36:59 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.558 10:36:59 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:38.558 10:36:59 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.558 10:36:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:38.558 [2024-10-08 10:36:59.069008] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:38.558 [2024-10-08 10:36:59.069527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73001 ] 00:05:38.819 [2024-10-08 10:36:59.192892] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
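The cmdline test's target is started with --rpcs-allowed spdk_get_version,rpc_get_methods (traced above), so only those two methods answer; the env_dpdk_get_mem_stats call attempted below is expected to be rejected. A sketch, paths as in this run:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods         # allowed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # rejected: the restricted target answers with a JSON-RPC "Method not found"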
00:05:38.820 [2024-10-08 10:36:59.213522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.820 [2024-10-08 10:36:59.262978] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.399 10:36:59 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.399 10:36:59 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:39.399 10:36:59 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:39.659 { 00:05:39.659 "version": "SPDK v25.01-pre git sha1 92108e0a2", 00:05:39.659 "fields": { 00:05:39.659 "major": 25, 00:05:39.659 "minor": 1, 00:05:39.659 "patch": 0, 00:05:39.659 "suffix": "-pre", 00:05:39.659 "commit": "92108e0a2" 00:05:39.659 } 00:05:39.660 } 00:05:39.660 10:37:00 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:39.660 10:37:00 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:39.660 10:37:00 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:39.660 10:37:00 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:39.660 10:37:00 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:39.660 10:37:00 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.660 10:37:00 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:39.660 10:37:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:39.660 10:37:00 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:39.660 10:37:00 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.660 10:37:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:39.660 10:37:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:39.660 10:37:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.660 10:37:00 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:39.660 10:37:00 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.660 10:37:00 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.660 10:37:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.660 10:37:00 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.660 10:37:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.660 10:37:00 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.660 10:37:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.660 10:37:00 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:39.660 10:37:00 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:39.660 10:37:00 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:39.921 request: 00:05:39.921 { 00:05:39.921 "method": "env_dpdk_get_mem_stats", 00:05:39.921 "req_id": 1 00:05:39.921 } 00:05:39.921 Got JSON-RPC error response 00:05:39.921 response: 00:05:39.921 { 00:05:39.921 "code": -32601, 00:05:39.921 
"message": "Method not found" 00:05:39.921 } 00:05:39.921 10:37:00 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:39.921 10:37:00 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.921 10:37:00 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.921 10:37:00 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.921 10:37:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73001 00:05:39.921 10:37:00 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 73001 ']' 00:05:39.921 10:37:00 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 73001 00:05:39.921 10:37:00 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:39.921 10:37:00 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.921 10:37:00 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73001 00:05:39.921 killing process with pid 73001 00:05:39.921 10:37:00 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.921 10:37:00 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.921 10:37:00 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73001' 00:05:39.921 10:37:00 app_cmdline -- common/autotest_common.sh@969 -- # kill 73001 00:05:39.921 10:37:00 app_cmdline -- common/autotest_common.sh@974 -- # wait 73001 00:05:40.181 00:05:40.181 real 0m1.871s 00:05:40.181 user 0m2.185s 00:05:40.181 sys 0m0.468s 00:05:40.181 10:37:00 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.181 ************************************ 00:05:40.181 END TEST app_cmdline 00:05:40.181 ************************************ 00:05:40.181 10:37:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:40.441 10:37:00 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:40.441 10:37:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.441 10:37:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.441 10:37:00 -- common/autotest_common.sh@10 -- # set +x 00:05:40.441 ************************************ 00:05:40.441 START TEST version 00:05:40.441 ************************************ 00:05:40.441 10:37:00 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:40.441 * Looking for test storage... 
00:05:40.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:40.441 10:37:00 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.441 10:37:00 version -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.441 10:37:00 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.441 10:37:00 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.441 10:37:00 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.441 10:37:00 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.441 10:37:00 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.441 10:37:00 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.441 10:37:00 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.441 10:37:00 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.441 10:37:00 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.441 10:37:00 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.441 10:37:00 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.441 10:37:00 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.441 10:37:00 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.441 10:37:00 version -- scripts/common.sh@344 -- # case "$op" in 00:05:40.441 10:37:00 version -- scripts/common.sh@345 -- # : 1 00:05:40.441 10:37:00 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.441 10:37:00 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.441 10:37:00 version -- scripts/common.sh@365 -- # decimal 1 00:05:40.441 10:37:00 version -- scripts/common.sh@353 -- # local d=1 00:05:40.441 10:37:00 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.441 10:37:00 version -- scripts/common.sh@355 -- # echo 1 00:05:40.441 10:37:00 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.441 10:37:00 version -- scripts/common.sh@366 -- # decimal 2 00:05:40.441 10:37:00 version -- scripts/common.sh@353 -- # local d=2 00:05:40.441 10:37:00 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.441 10:37:00 version -- scripts/common.sh@355 -- # echo 2 00:05:40.441 10:37:00 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.441 10:37:00 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.441 10:37:00 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.441 10:37:00 version -- scripts/common.sh@368 -- # return 0 00:05:40.441 10:37:00 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.441 10:37:00 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.441 --rc genhtml_branch_coverage=1 00:05:40.441 --rc genhtml_function_coverage=1 00:05:40.441 --rc genhtml_legend=1 00:05:40.441 --rc geninfo_all_blocks=1 00:05:40.441 --rc geninfo_unexecuted_blocks=1 00:05:40.441 00:05:40.441 ' 00:05:40.441 10:37:00 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.441 --rc genhtml_branch_coverage=1 00:05:40.441 --rc genhtml_function_coverage=1 00:05:40.441 --rc genhtml_legend=1 00:05:40.441 --rc geninfo_all_blocks=1 00:05:40.441 --rc geninfo_unexecuted_blocks=1 00:05:40.441 00:05:40.441 ' 00:05:40.441 10:37:00 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:40.441 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:40.441 --rc genhtml_branch_coverage=1 00:05:40.441 --rc genhtml_function_coverage=1 00:05:40.441 --rc genhtml_legend=1 00:05:40.441 --rc geninfo_all_blocks=1 00:05:40.441 --rc geninfo_unexecuted_blocks=1 00:05:40.441 00:05:40.441 ' 00:05:40.441 10:37:00 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.441 --rc genhtml_branch_coverage=1 00:05:40.441 --rc genhtml_function_coverage=1 00:05:40.441 --rc genhtml_legend=1 00:05:40.441 --rc geninfo_all_blocks=1 00:05:40.441 --rc geninfo_unexecuted_blocks=1 00:05:40.441 00:05:40.441 ' 00:05:40.441 10:37:00 version -- app/version.sh@17 -- # get_header_version major 00:05:40.441 10:37:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:40.441 10:37:00 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.441 10:37:00 version -- app/version.sh@14 -- # cut -f2 00:05:40.441 10:37:00 version -- app/version.sh@17 -- # major=25 00:05:40.441 10:37:00 version -- app/version.sh@18 -- # get_header_version minor 00:05:40.441 10:37:00 version -- app/version.sh@14 -- # cut -f2 00:05:40.441 10:37:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:40.441 10:37:00 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.441 10:37:00 version -- app/version.sh@18 -- # minor=1 00:05:40.441 10:37:00 version -- app/version.sh@19 -- # get_header_version patch 00:05:40.441 10:37:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:40.441 10:37:00 version -- app/version.sh@14 -- # cut -f2 00:05:40.442 10:37:00 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.442 10:37:00 version -- app/version.sh@19 -- # patch=0 00:05:40.442 10:37:00 version -- app/version.sh@20 -- # get_header_version suffix 00:05:40.442 10:37:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:40.442 10:37:00 version -- app/version.sh@14 -- # tr -d '"' 00:05:40.442 10:37:00 version -- app/version.sh@14 -- # cut -f2 00:05:40.442 10:37:00 version -- app/version.sh@20 -- # suffix=-pre 00:05:40.442 10:37:00 version -- app/version.sh@22 -- # version=25.1 00:05:40.442 10:37:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:40.442 10:37:00 version -- app/version.sh@28 -- # version=25.1rc0 00:05:40.442 10:37:00 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:40.442 10:37:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:40.442 10:37:00 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:40.442 10:37:00 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:40.442 00:05:40.442 real 0m0.203s 00:05:40.442 user 0m0.124s 00:05:40.442 sys 0m0.103s 00:05:40.442 ************************************ 00:05:40.442 END TEST version 00:05:40.442 ************************************ 00:05:40.442 10:37:00 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.442 10:37:00 version -- common/autotest_common.sh@10 -- # set +x 00:05:40.703 10:37:01 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:40.703 10:37:01 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:40.703 10:37:01 -- spdk/autotest.sh@194 -- # uname -s 00:05:40.703 10:37:01 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:40.703 10:37:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:40.703 10:37:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:40.703 10:37:01 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:40.703 10:37:01 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:40.703 10:37:01 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:40.703 10:37:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.703 10:37:01 -- common/autotest_common.sh@10 -- # set +x 00:05:40.703 ************************************ 00:05:40.703 START TEST blockdev_nvme 00:05:40.703 ************************************ 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:40.703 * Looking for test storage... 00:05:40.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.703 10:37:01 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.703 --rc genhtml_branch_coverage=1 00:05:40.703 --rc genhtml_function_coverage=1 00:05:40.703 --rc genhtml_legend=1 00:05:40.703 --rc geninfo_all_blocks=1 00:05:40.703 --rc geninfo_unexecuted_blocks=1 00:05:40.703 00:05:40.703 ' 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.703 --rc genhtml_branch_coverage=1 00:05:40.703 --rc genhtml_function_coverage=1 00:05:40.703 --rc genhtml_legend=1 00:05:40.703 --rc geninfo_all_blocks=1 00:05:40.703 --rc geninfo_unexecuted_blocks=1 00:05:40.703 00:05:40.703 ' 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:40.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.703 --rc genhtml_branch_coverage=1 00:05:40.703 --rc genhtml_function_coverage=1 00:05:40.703 --rc genhtml_legend=1 00:05:40.703 --rc geninfo_all_blocks=1 00:05:40.703 --rc geninfo_unexecuted_blocks=1 00:05:40.703 00:05:40.703 ' 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.703 --rc genhtml_branch_coverage=1 00:05:40.703 --rc genhtml_function_coverage=1 00:05:40.703 --rc genhtml_legend=1 00:05:40.703 --rc geninfo_all_blocks=1 00:05:40.703 --rc geninfo_unexecuted_blocks=1 00:05:40.703 00:05:40.703 ' 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:40.703 10:37:01 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73162 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:40.703 10:37:01 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 73162 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 73162 ']' 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.703 10:37:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:40.964 [2024-10-08 10:37:01.292654] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:40.964 [2024-10-08 10:37:01.292967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73162 ] 00:05:40.964 [2024-10-08 10:37:01.438124] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:40.964 [2024-10-08 10:37:01.457654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.964 [2024-10-08 10:37:01.489978] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.536 10:37:02 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.536 10:37:02 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:05:41.536 10:37:02 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:05:41.536 10:37:02 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:05:41.536 10:37:02 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:41.536 10:37:02 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:41.536 10:37:02 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:41.797 10:37:02 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:41.797 10:37:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.797 10:37:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 
00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d2cba02d-8cab-4aa4-9f9f-7a99304f8b98"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d2cba02d-8cab-4aa4-9f9f-7a99304f8b98",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "5d82485f-764f-4009-a50a-c087ffbcfbd9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5d82485f-764f-4009-a50a-c087ffbcfbd9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": 
"Nvme2n1",' ' "aliases": [' ' "6d0c6a9d-5d4d-4d6e-b662-ad7ad3798f0f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6d0c6a9d-5d4d-4d6e-b662-ad7ad3798f0f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "fc783bc5-7ac1-4981-a78c-5af1977fe0e3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fc783bc5-7ac1-4981-a78c-5af1977fe0e3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9bf9c541-e7bf-4d38-b98f-e7dd5e19b332"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9bf9c541-e7bf-4d38-b98f-e7dd5e19b332",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ce2548db-6cd2-49ca-83ed-8788541c6e2c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ce2548db-6cd2-49ca-83ed-8788541c6e2c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:05:42.058 10:37:02 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 73162 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 73162 ']' 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 73162 00:05:42.058 10:37:02 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:05:42.059 10:37:02 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.059 10:37:02 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73162 00:05:42.059 10:37:02 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.059 10:37:02 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:05:42.059 killing process with pid 73162 00:05:42.059 10:37:02 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73162' 00:05:42.059 10:37:02 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 73162 00:05:42.059 10:37:02 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 73162 00:05:42.320 10:37:02 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:42.320 10:37:02 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:42.320 10:37:02 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:05:42.320 10:37:02 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.320 10:37:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:42.320 ************************************ 00:05:42.320 START TEST bdev_hello_world 00:05:42.320 ************************************ 00:05:42.320 10:37:02 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:42.320 [2024-10-08 10:37:02.871586] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:42.320 [2024-10-08 10:37:02.871697] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73235 ] 00:05:42.581 [2024-10-08 10:37:02.998922] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:42.581 [2024-10-08 10:37:03.020268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.581 [2024-10-08 10:37:03.052407] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.154 [2024-10-08 10:37:03.420434] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:43.154 [2024-10-08 10:37:03.420478] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:43.154 [2024-10-08 10:37:03.420498] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:43.154 [2024-10-08 10:37:03.422581] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:43.154 [2024-10-08 10:37:03.423200] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:43.154 [2024-10-08 10:37:03.423223] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:43.154 [2024-10-08 10:37:03.423681] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:05:43.154 00:05:43.154 [2024-10-08 10:37:03.423709] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:43.154 00:05:43.154 real 0m0.769s 00:05:43.154 user 0m0.516s 00:05:43.154 sys 0m0.148s 00:05:43.154 ************************************ 00:05:43.154 END TEST bdev_hello_world 00:05:43.154 ************************************ 00:05:43.154 10:37:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.154 10:37:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:43.154 10:37:03 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:05:43.154 10:37:03 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:43.154 10:37:03 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.154 10:37:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:43.154 ************************************ 00:05:43.154 START TEST bdev_bounds 00:05:43.154 ************************************ 00:05:43.154 10:37:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:05:43.154 10:37:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73266 00:05:43.154 10:37:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.154 Process bdevio pid: 73266 00:05:43.154 10:37:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73266' 00:05:43.154 10:37:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73266 00:05:43.154 10:37:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73266 ']' 00:05:43.154 10:37:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.154 10:37:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.154 10:37:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.154 10:37:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.154 10:37:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:43.154 10:37:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:43.154 [2024-10-08 10:37:03.691899] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:43.154 [2024-10-08 10:37:03.692014] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73266 ] 00:05:43.414 [2024-10-08 10:37:03.820084] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:43.414 [2024-10-08 10:37:03.841224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:43.414 [2024-10-08 10:37:03.875299] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.414 [2024-10-08 10:37:03.875765] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.414 [2024-10-08 10:37:03.875820] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.980 10:37:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.980 10:37:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:05:43.980 10:37:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:44.239 I/O targets: 00:05:44.239 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:44.239 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:44.239 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:44.239 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:44.239 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:44.239 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:44.239 00:05:44.239 00:05:44.239 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.239 http://cunit.sourceforge.net/ 00:05:44.239 00:05:44.239 00:05:44.239 Suite: bdevio tests on: Nvme3n1 00:05:44.239 Test: blockdev write read block ...passed 00:05:44.239 Test: blockdev write zeroes read block ...passed 00:05:44.239 Test: blockdev write zeroes read no split ...passed 00:05:44.239 Test: blockdev write zeroes read split ...passed 00:05:44.239 Test: blockdev write zeroes read split partial ...passed 00:05:44.239 Test: blockdev reset ...[2024-10-08 10:37:04.639663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:05:44.239 passed 00:05:44.239 Test: blockdev write read 8 blocks ...[2024-10-08 10:37:04.641325] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:44.239 passed 00:05:44.239 Test: blockdev write read size > 128k ...passed 00:05:44.239 Test: blockdev write read invalid size ...passed 00:05:44.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:44.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:44.239 Test: blockdev write read max offset ...passed 00:05:44.239 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:44.239 Test: blockdev writev readv 8 blocks ...passed 00:05:44.239 Test: blockdev writev readv 30 x 1block ...passed 00:05:44.239 Test: blockdev writev readv block ...passed 00:05:44.239 Test: blockdev writev readv size > 128k ...passed 00:05:44.239 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:44.239 Test: blockdev comparev and writev ...[2024-10-08 10:37:04.645818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ab406000 len:0x1000 00:05:44.239 [2024-10-08 10:37:04.645853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:44.239 passed 00:05:44.239 Test: blockdev nvme passthru rw ...passed 00:05:44.239 Test: blockdev nvme passthru vendor specific ...passed 00:05:44.239 Test: blockdev nvme admin passthru ...[2024-10-08 10:37:04.646269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:44.239 [2024-10-08 10:37:04.646289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:44.239 passed 00:05:44.239 Test: blockdev copy ...passed 00:05:44.239 Suite: bdevio tests on: Nvme2n3 00:05:44.239 Test: blockdev write read block ...passed 00:05:44.239 Test: blockdev write zeroes read block ...passed 00:05:44.239 Test: blockdev write zeroes read no split ...passed 00:05:44.239 Test: blockdev write zeroes read split ...passed 00:05:44.239 Test: blockdev write zeroes read split partial ...passed 00:05:44.239 Test: blockdev reset ...[2024-10-08 10:37:04.660347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:05:44.239 passed 00:05:44.239 Test: blockdev write read 8 blocks ...[2024-10-08 10:37:04.662010] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:44.239 passed 00:05:44.239 Test: blockdev write read size > 128k ...passed 00:05:44.239 Test: blockdev write read invalid size ...passed 00:05:44.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:44.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:44.239 Test: blockdev write read max offset ...passed 00:05:44.239 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:44.240 Test: blockdev writev readv 8 blocks ...passed 00:05:44.240 Test: blockdev writev readv 30 x 1block ...passed 00:05:44.240 Test: blockdev writev readv block ...passed 00:05:44.240 Test: blockdev writev readv size > 128k ...passed 00:05:44.240 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:44.240 Test: blockdev comparev and writev ...[2024-10-08 10:37:04.665544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2db805000 len:0x1000 00:05:44.240 [2024-10-08 10:37:04.665572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:44.240 passed 00:05:44.240 Test: blockdev nvme passthru rw ...passed 00:05:44.240 Test: blockdev nvme passthru vendor specific ...passed 00:05:44.240 Test: blockdev nvme admin passthru ...[2024-10-08 10:37:04.665973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:44.240 [2024-10-08 10:37:04.665991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:44.240 passed 00:05:44.240 Test: blockdev copy ...passed 00:05:44.240 Suite: bdevio tests on: Nvme2n2 00:05:44.240 Test: blockdev write read block ...passed 00:05:44.240 Test: blockdev write zeroes read block ...passed 00:05:44.240 Test: blockdev write zeroes read no split ...passed 00:05:44.240 Test: blockdev write zeroes read split ...passed 00:05:44.240 Test: blockdev write zeroes read split partial ...passed 00:05:44.240 Test: blockdev reset ...[2024-10-08 10:37:04.680971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:05:44.240 [2024-10-08 10:37:04.682587] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:44.240 passed 00:05:44.240 Test: blockdev write read 8 blocks ...passed 00:05:44.240 Test: blockdev write read size > 128k ...passed 00:05:44.240 Test: blockdev write read invalid size ...passed 00:05:44.240 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:44.240 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:44.240 Test: blockdev write read max offset ...passed 00:05:44.240 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:44.240 Test: blockdev writev readv 8 blocks ...passed 00:05:44.240 Test: blockdev writev readv 30 x 1block ...passed 00:05:44.240 Test: blockdev writev readv block ...passed 00:05:44.240 Test: blockdev writev readv size > 128k ...passed 00:05:44.240 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:44.240 Test: blockdev comparev and writev ...[2024-10-08 10:37:04.686551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dbc36000 len:0x1000 00:05:44.240 passed 00:05:44.240 Test: blockdev nvme passthru rw ...[2024-10-08 10:37:04.686579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:44.240 passed 00:05:44.240 Test: blockdev nvme passthru vendor specific ...[2024-10-08 10:37:04.686967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:44.240 passed 00:05:44.240 Test: blockdev nvme admin passthru ...[2024-10-08 10:37:04.686986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:44.240 passed 00:05:44.240 Test: blockdev copy ...passed 00:05:44.240 Suite: bdevio tests on: Nvme2n1 00:05:44.240 Test: blockdev write read block ...passed 00:05:44.240 Test: blockdev write zeroes read block ...passed 00:05:44.240 Test: blockdev write zeroes read no split ...passed 00:05:44.240 Test: blockdev write zeroes read split ...passed 00:05:44.240 Test: blockdev write zeroes read split partial ...passed 00:05:44.240 Test: blockdev reset ...[2024-10-08 10:37:04.701881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:05:44.240 [2024-10-08 10:37:04.703402] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:44.240 passed 00:05:44.240 Test: blockdev write read 8 blocks ...passed 00:05:44.240 Test: blockdev write read size > 128k ...passed 00:05:44.240 Test: blockdev write read invalid size ...passed 00:05:44.240 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:44.240 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:44.240 Test: blockdev write read max offset ...passed 00:05:44.240 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:44.240 Test: blockdev writev readv 8 blocks ...passed 00:05:44.240 Test: blockdev writev readv 30 x 1block ...passed 00:05:44.240 Test: blockdev writev readv block ...passed 00:05:44.240 Test: blockdev writev readv size > 128k ...passed 00:05:44.240 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:44.240 Test: blockdev comparev and writev ...[2024-10-08 10:37:04.707306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dbc30000 len:0x1000 00:05:44.240 passed 00:05:44.240 Test: blockdev nvme passthru rw ...[2024-10-08 10:37:04.707338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:44.240 passed 00:05:44.240 Test: blockdev nvme passthru vendor specific ...[2024-10-08 10:37:04.707691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:44.240 passed 00:05:44.240 Test: blockdev nvme admin passthru ...[2024-10-08 10:37:04.707708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:44.240 passed 00:05:44.240 Test: blockdev copy ...passed 00:05:44.240 Suite: bdevio tests on: Nvme1n1 00:05:44.240 Test: blockdev write read block ...passed 00:05:44.240 Test: blockdev write zeroes read block ...passed 00:05:44.240 Test: blockdev write zeroes read no split ...passed 00:05:44.240 Test: blockdev write zeroes read split ...passed 00:05:44.240 Test: blockdev write zeroes read split partial ...passed 00:05:44.240 Test: blockdev reset ...[2024-10-08 10:37:04.722618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:05:44.240 [2024-10-08 10:37:04.723950] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:05:44.240 passed 00:05:44.240 Test: blockdev write read 8 blocks ...passed 00:05:44.240 Test: blockdev write read size > 128k ...passed 00:05:44.240 Test: blockdev write read invalid size ...passed 00:05:44.240 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:44.240 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:44.240 Test: blockdev write read max offset ...passed 00:05:44.240 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:44.240 Test: blockdev writev readv 8 blocks ...passed 00:05:44.240 Test: blockdev writev readv 30 x 1block ...passed 00:05:44.240 Test: blockdev writev readv block ...passed 00:05:44.240 Test: blockdev writev readv size > 128k ...passed 00:05:44.240 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:44.240 Test: blockdev comparev and writev ...[2024-10-08 10:37:04.727355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dbc2c000 len:0x1000 00:05:44.240 [2024-10-08 10:37:04.727382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:44.240 passed 00:05:44.240 Test: blockdev nvme passthru rw ...passed 00:05:44.240 Test: blockdev nvme passthru vendor specific ...[2024-10-08 10:37:04.728036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:44.240 [2024-10-08 10:37:04.728056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:44.240 passed 00:05:44.240 Test: blockdev nvme admin passthru ...passed 00:05:44.240 Test: blockdev copy ...passed 00:05:44.240 Suite: bdevio tests on: Nvme0n1 00:05:44.240 Test: blockdev write read block ...passed 00:05:44.240 Test: blockdev write zeroes read block ...passed 00:05:44.240 Test: blockdev write zeroes read no split ...passed 00:05:44.240 Test: blockdev write zeroes read split ...passed 00:05:44.240 Test: blockdev write zeroes read split partial ...passed 00:05:44.240 Test: blockdev reset ...[2024-10-08 10:37:04.743309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:05:44.240 [2024-10-08 10:37:04.744630] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:05:44.240 passed 00:05:44.240 Test: blockdev write read 8 blocks ...passed 00:05:44.240 Test: blockdev write read size > 128k ...passed 00:05:44.240 Test: blockdev write read invalid size ...passed 00:05:44.240 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:44.240 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:44.240 Test: blockdev write read max offset ...passed 00:05:44.240 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:44.240 Test: blockdev writev readv 8 blocks ...passed 00:05:44.240 Test: blockdev writev readv 30 x 1block ...passed 00:05:44.240 Test: blockdev writev readv block ...passed 00:05:44.240 Test: blockdev writev readv size > 128k ...passed 00:05:44.240 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:44.240 Test: blockdev comparev and writev ...[2024-10-08 10:37:04.747831] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:05:44.240 separate metadata which is not supported yet. 
00:05:44.240 passed 00:05:44.240 Test: blockdev nvme passthru rw ...passed 00:05:44.240 Test: blockdev nvme passthru vendor specific ...[2024-10-08 10:37:04.748310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:44.240 [2024-10-08 10:37:04.748335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:44.240 passed 00:05:44.240 Test: blockdev nvme admin passthru ...passed 00:05:44.240 Test: blockdev copy ...passed 00:05:44.240 00:05:44.240 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.240 suites 6 6 n/a 0 0 00:05:44.240 tests 138 138 138 0 0 00:05:44.240 asserts 893 893 893 0 n/a 00:05:44.240 00:05:44.240 Elapsed time = 0.297 seconds 00:05:44.240 0 00:05:44.240 10:37:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73266 00:05:44.240 10:37:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73266 ']' 00:05:44.240 10:37:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73266 00:05:44.240 10:37:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:05:44.240 10:37:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.240 10:37:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73266 00:05:44.240 10:37:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.240 10:37:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.240 10:37:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73266' 00:05:44.240 killing process with pid 73266 00:05:44.240 10:37:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73266 00:05:44.240 10:37:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73266 00:05:44.499 10:37:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:44.499 00:05:44.499 real 0m1.303s 00:05:44.499 user 0m3.305s 00:05:44.499 sys 0m0.262s 00:05:44.499 10:37:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.499 ************************************ 00:05:44.499 END TEST bdev_bounds 00:05:44.499 ************************************ 00:05:44.499 10:37:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:44.499 10:37:04 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:44.499 10:37:04 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:44.499 10:37:04 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.499 10:37:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:44.499 ************************************ 00:05:44.499 START TEST bdev_nbd 00:05:44.499 ************************************ 00:05:44.499 10:37:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:44.499 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:44.499 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:44.499 10:37:05 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.499 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:44.499 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:44.499 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:44.499 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:44.499 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:44.499 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:44.499 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:44.499 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:44.500 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:44.500 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:44.500 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:44.500 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:44.500 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73309 00:05:44.500 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:44.500 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:44.500 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73309 /var/tmp/spdk-nbd.sock 00:05:44.500 10:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73309 ']' 00:05:44.500 10:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.500 10:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.500 10:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.500 10:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.500 10:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:44.500 [2024-10-08 10:37:05.065021] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:44.500 [2024-10-08 10:37:05.065127] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:44.760 [2024-10-08 10:37:05.193207] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
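Before the per-device checks start, it is worth spelling out what nbd_function_test is doing: it launches bdev_svc against the bdev JSON config and exports each bdev through the kernel nbd driver. A condensed sketch of that export step, using the same socket and bdev names as the trace (the real helpers add the retry loops and cleanup traps omitted here):

# Stub app serving the bdevs over the dedicated RPC socket.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# Export a bdev; with no device argument the target grabs the first free /dev/nbdX.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
# A direct 4 KiB read is the cheapest proof the block device is actually live.
dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

Going through nbd exercises the full kernel block path on top of the bdev, which is why a plain direct-I/O dd is enough of a liveness probe in the waitfornbd helper traced below.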
00:05:44.760 [2024-10-08 10:37:05.213509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.760 [2024-10-08 10:37:05.245744] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:45.332 10:37:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:45.592 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:45.592 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:45.593 1+0 records in 00:05:45.593 1+0 records out 00:05:45.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000892697 s, 4.6 MB/s 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:45.593 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:45.851 1+0 records in 00:05:45.851 1+0 records out 00:05:45.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797371 s, 5.1 MB/s 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:45.851 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( 
i = 1 )) 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:46.112 1+0 records in 00:05:46.112 1+0 records out 00:05:46.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000739869 s, 5.5 MB/s 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:46.112 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:46.373 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:46.373 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:46.373 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:46.373 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:05:46.373 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:46.373 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.373 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.373 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:05:46.373 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:46.373 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.374 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.374 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:46.374 1+0 records in 00:05:46.374 1+0 records out 00:05:46.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720966 s, 5.7 MB/s 00:05:46.374 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:46.374 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:46.374 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:46.374 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 
'!=' 0 ']' 00:05:46.374 10:37:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:46.374 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:46.374 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:46.374 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:46.635 10:37:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:46.635 1+0 records in 00:05:46.635 1+0 records out 00:05:46.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102993 s, 4.0 MB/s 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:46.635 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # break 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:46.897 1+0 records in 00:05:46.897 1+0 records out 00:05:46.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000990064 s, 4.1 MB/s 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:46.897 { 00:05:46.897 "nbd_device": "/dev/nbd0", 00:05:46.897 "bdev_name": "Nvme0n1" 00:05:46.897 }, 00:05:46.897 { 00:05:46.897 "nbd_device": "/dev/nbd1", 00:05:46.897 "bdev_name": "Nvme1n1" 00:05:46.897 }, 00:05:46.897 { 00:05:46.897 "nbd_device": "/dev/nbd2", 00:05:46.897 "bdev_name": "Nvme2n1" 00:05:46.897 }, 00:05:46.897 { 00:05:46.897 "nbd_device": "/dev/nbd3", 00:05:46.897 "bdev_name": "Nvme2n2" 00:05:46.897 }, 00:05:46.897 { 00:05:46.897 "nbd_device": "/dev/nbd4", 00:05:46.897 "bdev_name": "Nvme2n3" 00:05:46.897 }, 00:05:46.897 { 00:05:46.897 "nbd_device": "/dev/nbd5", 00:05:46.897 "bdev_name": "Nvme3n1" 00:05:46.897 } 00:05:46.897 ]' 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:46.897 { 00:05:46.897 "nbd_device": "/dev/nbd0", 00:05:46.897 "bdev_name": "Nvme0n1" 00:05:46.897 }, 00:05:46.897 { 00:05:46.897 "nbd_device": "/dev/nbd1", 00:05:46.897 "bdev_name": "Nvme1n1" 00:05:46.897 }, 00:05:46.897 { 00:05:46.897 "nbd_device": "/dev/nbd2", 00:05:46.897 "bdev_name": "Nvme2n1" 00:05:46.897 }, 00:05:46.897 { 00:05:46.897 "nbd_device": "/dev/nbd3", 00:05:46.897 "bdev_name": "Nvme2n2" 00:05:46.897 }, 00:05:46.897 { 00:05:46.897 "nbd_device": "/dev/nbd4", 00:05:46.897 "bdev_name": "Nvme2n3" 00:05:46.897 }, 00:05:46.897 { 00:05:46.897 "nbd_device": "/dev/nbd5", 00:05:46.897 "bdev_name": "Nvme3n1" 00:05:46.897 } 00:05:46.897 ]' 00:05:46.897 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.158 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.418 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.418 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.418 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.418 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.418 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.418 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.418 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:47.418 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.418 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.418 10:37:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:47.679 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:47.679 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:47.679 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:47.679 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.679 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.679 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:47.679 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:47.679 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.679 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.679 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:47.940 
10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.940 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:48.201 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:48.201 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:48.201 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:48.201 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.201 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.201 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:48.201 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:48.201 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.201 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.201 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.201 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # echo '' 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:48.462 10:37:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:48.723 /dev/nbd0 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:48.723 
10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:48.723 1+0 records in 00:05:48.723 1+0 records out 00:05:48.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462759 s, 8.9 MB/s 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:48.723 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:48.999 /dev/nbd1 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:48.999 1+0 records in 00:05:48.999 1+0 records out 00:05:48.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299229 s, 13.7 MB/s 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:48.999 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:49.271 /dev/nbd10 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:49.271 1+0 records in 00:05:49.271 1+0 records out 00:05:49.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471903 s, 8.7 MB/s 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:49.271 /dev/nbd11 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:49.271 1+0 records in 00:05:49.271 1+0 records 
out 00:05:49.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412061 s, 9.9 MB/s 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:49.271 10:37:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:49.530 /dev/nbd12 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:49.530 1+0 records in 00:05:49.530 1+0 records out 00:05:49.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497793 s, 8.2 MB/s 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:49.530 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:05:49.789 /dev/nbd13 00:05:49.789 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:05:49.789 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:05:49.789 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:05:49.789 10:37:10 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:05:49.789 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:49.789 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:49.789 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:05:49.789 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:05:49.789 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:49.789 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:49.789 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:49.789 1+0 records in 00:05:49.789 1+0 records out 00:05:49.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603832 s, 6.8 MB/s 00:05:49.790 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:49.790 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:05:49.790 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:49.790 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:49.790 10:37:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:05:49.790 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.790 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:49.790 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.790 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.790 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:50.048 { 00:05:50.048 "nbd_device": "/dev/nbd0", 00:05:50.048 "bdev_name": "Nvme0n1" 00:05:50.048 }, 00:05:50.048 { 00:05:50.048 "nbd_device": "/dev/nbd1", 00:05:50.048 "bdev_name": "Nvme1n1" 00:05:50.048 }, 00:05:50.048 { 00:05:50.048 "nbd_device": "/dev/nbd10", 00:05:50.048 "bdev_name": "Nvme2n1" 00:05:50.048 }, 00:05:50.048 { 00:05:50.048 "nbd_device": "/dev/nbd11", 00:05:50.048 "bdev_name": "Nvme2n2" 00:05:50.048 }, 00:05:50.048 { 00:05:50.048 "nbd_device": "/dev/nbd12", 00:05:50.048 "bdev_name": "Nvme2n3" 00:05:50.048 }, 00:05:50.048 { 00:05:50.048 "nbd_device": "/dev/nbd13", 00:05:50.048 "bdev_name": "Nvme3n1" 00:05:50.048 } 00:05:50.048 ]' 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:50.048 { 00:05:50.048 "nbd_device": "/dev/nbd0", 00:05:50.048 "bdev_name": "Nvme0n1" 00:05:50.048 }, 00:05:50.048 { 00:05:50.048 "nbd_device": "/dev/nbd1", 00:05:50.048 "bdev_name": "Nvme1n1" 00:05:50.048 }, 00:05:50.048 { 00:05:50.048 "nbd_device": "/dev/nbd10", 00:05:50.048 "bdev_name": "Nvme2n1" 00:05:50.048 }, 00:05:50.048 { 00:05:50.048 "nbd_device": "/dev/nbd11", 00:05:50.048 "bdev_name": "Nvme2n2" 00:05:50.048 }, 00:05:50.048 { 00:05:50.048 "nbd_device": "/dev/nbd12", 00:05:50.048 "bdev_name": "Nvme2n3" 
00:05:50.048 }, 00:05:50.048 { 00:05:50.048 "nbd_device": "/dev/nbd13", 00:05:50.048 "bdev_name": "Nvme3n1" 00:05:50.048 } 00:05:50.048 ]' 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:50.048 /dev/nbd1 00:05:50.048 /dev/nbd10 00:05:50.048 /dev/nbd11 00:05:50.048 /dev/nbd12 00:05:50.048 /dev/nbd13' 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:50.048 /dev/nbd1 00:05:50.048 /dev/nbd10 00:05:50.048 /dev/nbd11 00:05:50.048 /dev/nbd12 00:05:50.048 /dev/nbd13' 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:05:50.048 256+0 records in 00:05:50.048 256+0 records out 00:05:50.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00818174 s, 128 MB/s 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:50.048 256+0 records in 00:05:50.048 256+0 records out 00:05:50.048 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0501758 s, 20.9 MB/s 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.048 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:50.308 256+0 records in 00:05:50.308 256+0 records out 00:05:50.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0519866 s, 20.2 MB/s 00:05:50.308 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.308 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:05:50.308 256+0 records in 00:05:50.308 256+0 records out 00:05:50.309 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139764 s, 7.5 MB/s 00:05:50.309 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.309 10:37:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 
count=256 oflag=direct 00:05:50.570 256+0 records in 00:05:50.570 256+0 records out 00:05:50.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.220308 s, 4.8 MB/s 00:05:50.570 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.570 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:05:50.570 256+0 records in 00:05:50.570 256+0 records out 00:05:50.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13049 s, 8.0 MB/s 00:05:50.570 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:50.570 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:05:50.829 256+0 records in 00:05:50.829 256+0 records out 00:05:50.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.216821 s, 4.8 MB/s 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # 
nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.829 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.090 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.090 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.090 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.090 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.090 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.090 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.090 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:51.090 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.090 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.090 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.351 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.351 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.351 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.351 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.351 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.351 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.351 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:51.351 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.351 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.351 10:37:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:05:51.609 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:05:51.609 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:05:51.609 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:05:51.609 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.609 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.609 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:05:51.609 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:51.609 10:37:12 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:51.609 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.609 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.867 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:05:52.126 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:05:52.126 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:05:52.126 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:05:52.126 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.126 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.126 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:05:52.126 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:52.126 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.126 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.126 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.126 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 
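With every export torn down, nbd_get_disks has to come back as an empty array before the lvol round-trip is allowed to start; the echo/grep -c pipeline that follows counts surviving /dev/nbd entries. A one-line restatement of that assertion, assuming jq instead of the grep counting the script actually uses:

# Hypothetical equivalent of the count=0 check below.
test "$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq length)" -eq 0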
00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:05:52.386 10:37:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:05:52.645 malloc_lvol_verify 00:05:52.645 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:05:52.902 35d99ab5-b335-4c27-8d3d-01af70fa8210 00:05:52.902 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:05:52.902 6eed203b-5fe6-42b8-a375-91d9edc8237e 00:05:52.902 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:05:53.161 /dev/nbd0 00:05:53.161 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:05:53.161 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:05:53.161 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:05:53.161 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:05:53.161 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:05:53.161 mke2fs 1.47.0 (5-Feb-2023) 00:05:53.161 Discarding device blocks: 0/4096 done 00:05:53.161 Creating filesystem with 4096 1k blocks and 1024 inodes 00:05:53.161 00:05:53.161 Allocating group tables: 0/1 done 00:05:53.161 Writing inode tables: 0/1 done 00:05:53.161 Creating journal (1024 blocks): done 00:05:53.161 Writing superblocks and filesystem accounting information: 0/1 done 00:05:53.161 00:05:53.161 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:05:53.161 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.161 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:05:53.161 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.161 10:37:13 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:53.161 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.161 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73309 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73309 ']' 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73309 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73309 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.420 killing process with pid 73309 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73309' 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73309 00:05:53.420 10:37:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73309 00:05:53.681 10:37:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:05:53.681 00:05:53.681 real 0m9.062s 00:05:53.681 user 0m13.007s 00:05:53.681 sys 0m3.008s 00:05:53.681 10:37:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.681 10:37:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:53.681 ************************************ 00:05:53.681 END TEST bdev_nbd 00:05:53.681 ************************************ 00:05:53.681 10:37:14 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:05:53.681 10:37:14 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:05:53.681 skipping fio tests on NVMe due to multi-ns failures. 00:05:53.681 10:37:14 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
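
Just before that teardown, nbd_with_lvol_verify proved the NBD path end to end: a malloc bdev, a logical volume store and volume layered on it, an export on /dev/nbd0, and an ext4 format as the sanity check. The same sequence can be replayed by hand with the RPC calls from the trace (the size arguments are copied verbatim; reading them as megabyte totals and a 512-byte block size is my interpretation of the RPC signatures):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # backing malloc bdev
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the lvstore UUID
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # small volume inside lvs
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # export as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                                 # fails loudly if the export is broken
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
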
00:05:53.681 10:37:14 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:53.681 10:37:14 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:53.681 10:37:14 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:05:53.681 10:37:14 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.681 10:37:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:53.681 ************************************ 00:05:53.681 START TEST bdev_verify 00:05:53.681 ************************************ 00:05:53.681 10:37:14 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:53.681 [2024-10-08 10:37:14.169897] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:53.681 [2024-10-08 10:37:14.170006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73676 ] 00:05:53.941 [2024-10-08 10:37:14.298483] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:53.941 [2024-10-08 10:37:14.317985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.941 [2024-10-08 10:37:14.347320] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.941 [2024-10-08 10:37:14.347392] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.199 Running I/O for 5 seconds... 
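
bdev_verify is plain bdevperf pointed at the generated bdev.json. The flags below are copied from the traced command line; the per-flag notes are my reading of them, not quoted documentation, and the empty string the harness appends is carried along unchanged:

    args=(
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # config attaching the four NVMe controllers
        -q 128     # queue depth per job
        -o 4096    # 4 KiB I/Os
        -w verify  # write, read back, compare
        -t 5       # run time in seconds
        -C         # inferred: allow each enabled core to drive every bdev
        -m 0x3     # core mask, reactors on cores 0 and 1
        ''
    )
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"
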
00:05:56.510 25920.00 IOPS, 101.25 MiB/s [2024-10-08T10:37:18.030Z] 24448.00 IOPS, 95.50 MiB/s [2024-10-08T10:37:19.032Z] 22954.67 IOPS, 89.67 MiB/s [2024-10-08T10:37:19.970Z] 23088.00 IOPS, 90.19 MiB/s [2024-10-08T10:37:19.970Z] 22425.60 IOPS, 87.60 MiB/s 00:05:59.393 Latency(us) 00:05:59.393 [2024-10-08T10:37:19.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:59.393 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:59.393 Verification LBA range: start 0x0 length 0xbd0bd 00:05:59.394 Nvme0n1 : 5.05 1887.00 7.37 0.00 0.00 67587.21 6704.84 66544.25 00:05:59.394 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:59.394 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:05:59.394 Nvme0n1 : 5.05 1825.80 7.13 0.00 0.00 69942.10 9981.64 66947.54 00:05:59.394 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:59.394 Verification LBA range: start 0x0 length 0xa0000 00:05:59.394 Nvme1n1 : 5.07 1894.10 7.40 0.00 0.00 67361.05 11191.53 66140.95 00:05:59.394 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:59.394 Verification LBA range: start 0xa0000 length 0xa0000 00:05:59.394 Nvme1n1 : 5.05 1824.96 7.13 0.00 0.00 69830.96 11393.18 64527.75 00:05:59.394 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:59.394 Verification LBA range: start 0x0 length 0x80000 00:05:59.394 Nvme2n1 : 5.07 1893.61 7.40 0.00 0.00 67267.09 9880.81 63721.16 00:05:59.394 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:59.394 Verification LBA range: start 0x80000 length 0x80000 00:05:59.394 Nvme2n1 : 5.05 1824.57 7.13 0.00 0.00 69715.38 11846.89 63721.16 00:05:59.394 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:59.394 Verification LBA range: start 0x0 length 0x80000 00:05:59.394 Nvme2n2 : 5.07 1893.12 7.40 0.00 0.00 67187.31 10233.70 62914.56 00:05:59.394 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:59.394 Verification LBA range: start 0x80000 length 0x80000 00:05:59.394 Nvme2n2 : 5.05 1824.17 7.13 0.00 0.00 69592.78 11090.71 61704.66 00:05:59.394 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:59.394 Verification LBA range: start 0x0 length 0x80000 00:05:59.394 Nvme2n3 : 5.07 1892.04 7.39 0.00 0.00 67106.89 11695.66 64527.75 00:05:59.394 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:59.394 Verification LBA range: start 0x80000 length 0x80000 00:05:59.394 Nvme2n3 : 5.05 1823.77 7.12 0.00 0.00 69470.44 10687.41 65334.35 00:05:59.394 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:59.394 Verification LBA range: start 0x0 length 0x20000 00:05:59.394 Nvme3n1 : 5.08 1890.96 7.39 0.00 0.00 67036.51 9074.22 68157.44 00:05:59.394 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:59.394 Verification LBA range: start 0x20000 length 0x20000 00:05:59.394 Nvme3n1 : 5.07 1844.78 7.21 0.00 0.00 68664.99 2848.30 68157.44 00:05:59.394 [2024-10-08T10:37:19.971Z] =================================================================================================================== 00:05:59.394 [2024-10-08T10:37:19.971Z] Total : 22318.88 87.18 0.00 0.00 68374.42 2848.30 68157.44 00:06:00.330 00:06:00.330 real 0m6.554s 00:06:00.330 user 0m12.419s 00:06:00.330 sys 0m0.193s 00:06:00.330 ************************************ 00:06:00.330 
10:37:20 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.330 10:37:20 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:00.330 END TEST bdev_verify 00:06:00.330 ************************************ 00:06:00.330 10:37:20 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:00.330 10:37:20 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:00.330 10:37:20 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.330 10:37:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:00.330 ************************************ 00:06:00.330 START TEST bdev_verify_big_io 00:06:00.330 ************************************ 00:06:00.330 10:37:20 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:00.330 [2024-10-08 10:37:20.762816] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:00.330 [2024-10-08 10:37:20.762901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73769 ] 00:06:00.330 [2024-10-08 10:37:20.885662] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:00.589 [2024-10-08 10:37:20.905739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.589 [2024-10-08 10:37:20.941000] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.589 [2024-10-08 10:37:20.941041] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.847 Running I/O for 5 seconds... 
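
Every START TEST/END TEST banner pair in this log comes from the harness's run_test wrapper in autotest_common.sh. The real function also records timing data and toggles xtrace, which is why xtrace_disable and set +x entries bracket each banner; reduced to just the behavior the log shows, it acts roughly like:

    run_test() {
        local name=$1
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        "$@"; local rc=$?   # the test command with its arguments
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
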
00:06:06.939 1722.00 IOPS, 107.62 MiB/s [2024-10-08T10:37:27.516Z] 3164.00 IOPS, 197.75 MiB/s 00:06:06.939 Latency(us) 00:06:06.939 [2024-10-08T10:37:27.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:06.939 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:06.939 Verification LBA range: start 0x0 length 0xbd0b 00:06:06.939 Nvme0n1 : 5.64 132.37 8.27 0.00 0.00 923519.97 10384.94 987274.63 00:06:06.939 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:06.939 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:06.939 Nvme0n1 : 5.65 133.89 8.37 0.00 0.00 930614.27 19559.98 929199.66 00:06:06.939 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:06.939 Verification LBA range: start 0x0 length 0xa000 00:06:06.939 Nvme1n1 : 5.72 130.25 8.14 0.00 0.00 924574.85 45976.02 1580929.97 00:06:06.939 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:06.939 Verification LBA range: start 0xa000 length 0xa000 00:06:06.939 Nvme1n1 : 5.65 131.03 8.19 0.00 0.00 918731.34 52428.80 987274.63 00:06:06.939 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:06.939 Verification LBA range: start 0x0 length 0x8000 00:06:06.939 Nvme2n1 : 5.77 140.32 8.77 0.00 0.00 841436.97 10788.23 1355082.83 00:06:06.939 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:06.939 Verification LBA range: start 0x8000 length 0x8000 00:06:06.939 Nvme2n1 : 5.65 135.95 8.50 0.00 0.00 873511.91 78643.20 1013085.74 00:06:06.939 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:06.939 Verification LBA range: start 0x0 length 0x8000 00:06:06.939 Nvme2n2 : 5.77 141.68 8.86 0.00 0.00 807903.97 10586.58 1122782.92 00:06:06.939 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:06.939 Verification LBA range: start 0x8000 length 0x8000 00:06:06.939 Nvme2n2 : 5.65 135.91 8.49 0.00 0.00 850315.16 79449.80 1032444.06 00:06:06.939 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:06.939 Verification LBA range: start 0x0 length 0x8000 00:06:06.940 Nvme2n3 : 5.78 141.78 8.86 0.00 0.00 785875.74 32667.18 1729343.80 00:06:06.940 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:06.940 Verification LBA range: start 0x8000 length 0x8000 00:06:06.940 Nvme2n3 : 5.73 145.25 9.08 0.00 0.00 777848.94 32062.23 1064707.94 00:06:06.940 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:06.940 Verification LBA range: start 0x0 length 0x2000 00:06:06.940 Nvme3n1 : 5.81 166.95 10.43 0.00 0.00 653707.26 630.15 1755154.90 00:06:06.940 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:06.940 Verification LBA range: start 0x2000 length 0x2000 00:06:06.940 Nvme3n1 : 5.77 159.67 9.98 0.00 0.00 692566.89 1562.78 1084066.26 00:06:06.940 [2024-10-08T10:37:27.517Z] =================================================================================================================== 00:06:06.940 [2024-10-08T10:37:27.517Z] Total : 1695.06 105.94 0.00 0.00 824337.26 630.15 1755154.90 00:06:07.201 00:06:07.201 real 0m6.944s 00:06:07.201 user 0m13.220s 00:06:07.201 sys 0m0.187s 00:06:07.201 10:37:27 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.201 10:37:27 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 
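
When comparing runs, it can help to flatten a bdevperf latency table like the ones above into one line per job. The result rows have the shape "NAME : runtime IOPS MiB/s Fail/s TO/s Average min max", so a short awk does it when run over bdevperf's own stdout (bdevperf.log is a placeholder name; the CI wrapper prefixes every line with an elapsed-time column, which would shift these field numbers by one):

    awk '$2 == ":" && $1 ~ /^Nvme/ {
        printf "%-8s %10.2f IOPS  avg %12.2f us\n", $1, $4, $8
    }' bdevperf.log
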
00:06:07.201 ************************************ 00:06:07.201 END TEST bdev_verify_big_io 00:06:07.201 ************************************ 00:06:07.201 10:37:27 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:07.201 10:37:27 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:07.201 10:37:27 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.201 10:37:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:07.201 ************************************ 00:06:07.201 START TEST bdev_write_zeroes 00:06:07.201 ************************************ 00:06:07.201 10:37:27 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:07.201 [2024-10-08 10:37:27.772100] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:07.201 [2024-10-08 10:37:27.772206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73858 ] 00:06:07.460 [2024-10-08 10:37:27.900854] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:07.460 [2024-10-08 10:37:27.917892] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.460 [2024-10-08 10:37:27.946072] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.025 Running I/O for 1 seconds... 
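
The write_zeroes pass reuses the same binary and the same --json config; relative to the verify runs only the workload flags change, and with no -m mask the app comes up on a single core, which matches the "Total cores available: 1" notice above. The traced invocation, annotated:

    # deltas from the verify runs, as I read them:
    #   -w write_zeroes  issue WRITE ZEROES commands instead of verify I/O
    #   -t 1             one-second run
    #   no -m mask       default core mask, a single reactor
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1 ''
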
00:06:08.961 70272.00 IOPS, 274.50 MiB/s 00:06:08.961 Latency(us) 00:06:08.961 [2024-10-08T10:37:29.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:08.961 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:08.961 Nvme0n1 : 1.02 11649.07 45.50 0.00 0.00 10963.55 5747.00 20769.87 00:06:08.961 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:08.961 Nvme1n1 : 1.02 11635.60 45.45 0.00 0.00 10968.01 6755.25 21072.34 00:06:08.961 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:08.961 Nvme2n1 : 1.02 11622.48 45.40 0.00 0.00 10961.40 6956.90 20467.40 00:06:08.961 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:08.961 Nvme2n2 : 1.03 11609.40 45.35 0.00 0.00 10954.90 6956.90 20064.10 00:06:08.961 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:08.961 Nvme2n3 : 1.03 11596.26 45.30 0.00 0.00 10950.73 6755.25 19559.98 00:06:08.961 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:08.961 Nvme3n1 : 1.03 11583.15 45.25 0.00 0.00 10927.57 5595.77 20870.70 00:06:08.961 [2024-10-08T10:37:29.538Z] =================================================================================================================== 00:06:08.961 [2024-10-08T10:37:29.538Z] Total : 69695.96 272.25 0.00 0.00 10954.36 5595.77 21072.34 00:06:08.961 00:06:08.961 real 0m1.814s 00:06:08.961 user 0m1.544s 00:06:08.961 sys 0m0.159s 00:06:08.961 10:37:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.961 ************************************ 00:06:08.961 END TEST bdev_write_zeroes 00:06:08.961 10:37:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:08.961 ************************************ 00:06:09.221 10:37:29 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:09.221 10:37:29 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:09.221 10:37:29 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.221 10:37:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.221 ************************************ 00:06:09.221 START TEST bdev_json_nonenclosed 00:06:09.221 ************************************ 00:06:09.221 10:37:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:09.221 [2024-10-08 10:37:29.643573] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:09.221 [2024-10-08 10:37:29.643692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73900 ] 00:06:09.221 [2024-10-08 10:37:29.771053] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:09.221 [2024-10-08 10:37:29.789607] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.482 [2024-10-08 10:37:29.821743] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.482 [2024-10-08 10:37:29.821834] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:09.482 [2024-10-08 10:37:29.821851] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:09.482 [2024-10-08 10:37:29.821863] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.482 00:06:09.482 real 0m0.315s 00:06:09.482 user 0m0.117s 00:06:09.482 sys 0m0.096s 00:06:09.482 10:37:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.482 ************************************ 00:06:09.482 END TEST bdev_json_nonenclosed 00:06:09.482 ************************************ 00:06:09.482 10:37:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:09.482 10:37:29 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:09.482 10:37:29 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:09.482 10:37:29 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.482 10:37:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.482 ************************************ 00:06:09.482 START TEST bdev_json_nonarray 00:06:09.482 ************************************ 00:06:09.482 10:37:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:09.482 [2024-10-08 10:37:30.018409] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:09.482 [2024-10-08 10:37:30.018525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73931 ] 00:06:09.743 [2024-10-08 10:37:30.146289] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:09.743 [2024-10-08 10:37:30.167902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.743 [2024-10-08 10:37:30.200106] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.743 [2024-10-08 10:37:30.200203] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
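
bdev_json_nonenclosed and bdev_json_nonarray are deliberate failure tests: bdevperf is fed one config that is not wrapped in {} and one whose "subsystems" key is not an array, and the pass condition is exactly the *ERROR* lines above followed by a non-zero exit. Replayed by hand, one of them looks like this (the expected failure is what counts as success):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    if "$bdevperf" --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json \
            -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo 'FAIL: malformed config was accepted' >&2
        exit 1
    fi
    echo 'OK: config rejected as expected'
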
00:06:09.743 [2024-10-08 10:37:30.200228] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:09.743 [2024-10-08 10:37:30.200237] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.743 00:06:09.743 real 0m0.321s 00:06:09.743 user 0m0.127s 00:06:09.743 sys 0m0.090s 00:06:09.743 10:37:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.743 ************************************ 00:06:09.743 END TEST bdev_json_nonarray 00:06:09.743 ************************************ 00:06:09.743 10:37:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:10.004 10:37:30 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:10.004 10:37:30 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:10.004 10:37:30 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:10.004 10:37:30 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:10.004 10:37:30 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:10.004 10:37:30 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:10.004 10:37:30 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:10.004 10:37:30 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:10.004 10:37:30 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:10.004 10:37:30 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:10.004 10:37:30 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:10.004 00:06:10.004 real 0m29.282s 00:06:10.004 user 0m46.101s 00:06:10.004 sys 0m4.835s 00:06:10.004 10:37:30 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.004 ************************************ 00:06:10.004 10:37:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.004 END TEST blockdev_nvme 00:06:10.004 ************************************ 00:06:10.004 10:37:30 -- spdk/autotest.sh@209 -- # uname -s 00:06:10.004 10:37:30 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:10.004 10:37:30 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:10.004 10:37:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:10.004 10:37:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.004 10:37:30 -- common/autotest_common.sh@10 -- # set +x 00:06:10.004 ************************************ 00:06:10.004 START TEST blockdev_nvme_gpt 00:06:10.004 ************************************ 00:06:10.004 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:10.004 * Looking for test storage... 
00:06:10.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:10.004 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.004 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.004 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:06:10.004 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:10.004 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.005 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:10.005 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.005 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.005 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.005 10:37:30 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:10.005 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.005 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.005 --rc genhtml_branch_coverage=1 00:06:10.005 --rc genhtml_function_coverage=1 00:06:10.005 --rc genhtml_legend=1 00:06:10.005 --rc geninfo_all_blocks=1 00:06:10.005 --rc geninfo_unexecuted_blocks=1 00:06:10.005 00:06:10.005 ' 00:06:10.005 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.005 --rc 
genhtml_branch_coverage=1 00:06:10.005 --rc genhtml_function_coverage=1 00:06:10.005 --rc genhtml_legend=1 00:06:10.005 --rc geninfo_all_blocks=1 00:06:10.005 --rc geninfo_unexecuted_blocks=1 00:06:10.005 00:06:10.005 ' 00:06:10.005 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.005 --rc genhtml_branch_coverage=1 00:06:10.005 --rc genhtml_function_coverage=1 00:06:10.005 --rc genhtml_legend=1 00:06:10.005 --rc geninfo_all_blocks=1 00:06:10.005 --rc geninfo_unexecuted_blocks=1 00:06:10.005 00:06:10.005 ' 00:06:10.005 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.005 --rc genhtml_branch_coverage=1 00:06:10.005 --rc genhtml_function_coverage=1 00:06:10.005 --rc genhtml_legend=1 00:06:10.005 --rc geninfo_all_blocks=1 00:06:10.005 --rc geninfo_unexecuted_blocks=1 00:06:10.005 00:06:10.005 ' 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74004 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 74004 
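
Unlike the one-shot bdevperf runs above, the gpt suite starts a long-lived spdk_tgt daemon and blocks in waitforlisten until its RPC socket answers. The waitforlisten body runs behind xtrace_disable, so the polling sketched below is an assumption based on the function's name and the "Waiting for process..." message, not a transcript; the trap line and killprocess (the harness's own helper) are taken from the trace:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &   # target in the background
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT

    # poll the default RPC socket until the target responds (loop body assumed)
    for ((i = 0; i < 100; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
                rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done
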
00:06:10.005 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 74004 ']' 00:06:10.005 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.005 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.005 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.005 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.005 10:37:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:10.005 10:37:30 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:10.265 [2024-10-08 10:37:30.617230] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:10.265 [2024-10-08 10:37:30.617353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74004 ] 00:06:10.265 [2024-10-08 10:37:30.746962] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:10.265 [2024-10-08 10:37:30.766243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.265 [2024-10-08 10:37:30.810700] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.207 10:37:31 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.207 10:37:31 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:06:11.207 10:37:31 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:11.207 10:37:31 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:11.207 10:37:31 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:11.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:11.468 Waiting for block devices as requested 00:06:11.468 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:11.468 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:11.468 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:11.728 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:17.011 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:17.011 10:37:37 
blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:17.011 BYT; 00:06:17.011 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:17.011 BYT; 00:06:17.011 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w 
SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:17.011 10:37:37 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:17.011 10:37:37 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:17.947 The operation has completed successfully. 00:06:17.947 10:37:38 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:18.920 The operation has completed successfully. 00:06:18.920 10:37:39 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:19.177 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:19.742 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:19.742 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:19.742 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:19.742 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:19.742 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:19.742 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.742 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:19.742 [] 00:06:19.742 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.742 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:19.742 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:19.742 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:19.742 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:20.001 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:20.001 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.001 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:20.263 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.263 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.264 10:37:40 blockdev_nvme_gpt -- 
bdev/blockdev.sh@739 -- # cat 00:06:20.264 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.264 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.264 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.264 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:20.264 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:20.264 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.264 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:20.264 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:20.264 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9357aa43-5947-499a-8c63-23c5067ec8f7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9357aa43-5947-499a-8c63-23c5067ec8f7",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' 
'}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "dc6d4c19-af78-4b01-972f-16a1e43ce0a4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dc6d4c19-af78-4b01-972f-16a1e43ce0a4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' 
"security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "12942381-3536-4b55-9492-15fcf33e780b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "12942381-3536-4b55-9492-15fcf33e780b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c6d83184-688d-4545-aff8-64870287353f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c6d83184-688d-4545-aff8-64870287353f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "374a1dca-9b85-423b-9f19-002f3d903c3d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"374a1dca-9b85-423b-9f19-002f3d903c3d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:20.264 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:20.264 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:20.264 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:20.264 10:37:40 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 74004 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 74004 ']' 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 74004 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:06:20.264 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.265 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74004 00:06:20.265 killing process with pid 74004 00:06:20.265 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.265 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.265 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74004' 00:06:20.265 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 74004 00:06:20.265 10:37:40 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 74004 00:06:20.834 10:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:20.834 10:37:41 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:20.834 10:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:20.834 10:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.834 10:37:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:20.834 ************************************ 00:06:20.834 START TEST bdev_hello_world 00:06:20.834 ************************************ 00:06:20.834 10:37:41 blockdev_nvme_gpt.bdev_hello_world -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:20.834 [2024-10-08 10:37:41.216901] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:20.834 [2024-10-08 10:37:41.217228] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74617 ] 00:06:20.834 [2024-10-08 10:37:41.348558] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:20.834 [2024-10-08 10:37:41.369509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.095 [2024-10-08 10:37:41.420668] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.356 [2024-10-08 10:37:41.814354] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:21.356 [2024-10-08 10:37:41.814614] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:21.356 [2024-10-08 10:37:41.814649] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:21.356 [2024-10-08 10:37:41.817102] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:21.356 [2024-10-08 10:37:41.817998] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:21.356 [2024-10-08 10:37:41.818037] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:21.356 [2024-10-08 10:37:41.818592] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:21.356 00:06:21.356 [2024-10-08 10:37:41.818625] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:21.616 00:06:21.616 real 0m0.861s 00:06:21.616 user 0m0.546s 00:06:21.616 sys 0m0.209s 00:06:21.616 10:37:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.616 ************************************ 00:06:21.616 END TEST bdev_hello_world 00:06:21.616 ************************************ 00:06:21.616 10:37:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:21.616 10:37:42 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:21.616 10:37:42 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:21.616 10:37:42 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.617 10:37:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:21.617 ************************************ 00:06:21.617 START TEST bdev_bounds 00:06:21.617 ************************************ 00:06:21.617 10:37:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:21.617 Process bdevio pid: 74643 00:06:21.617 10:37:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74643 00:06:21.617 10:37:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.617 10:37:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74643' 00:06:21.617 10:37:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74643 00:06:21.617 10:37:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 74643 ']' 
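The enumeration traced at bdev/blockdev.sh@747-749 above boils down to three shell steps: dump every registered bdev over RPC, keep only the ones not claimed by another module, and collect their names. A standalone sketch of the same idea (assuming a running SPDK app on the default RPC socket; jq -c is a tweak over the traced jq -r so that mapfile sees one JSON object per line instead of a pretty-printed fragment):

  # list bdevs not claimed by another module and keep their names
  mapfile -t bdevs < <(scripts/rpc.py bdev_get_bdevs | jq -c '.[] | select(.claimed == false)')
  mapfile -t bdevs_name < <(printf '%s\n' "${bdevs[@]}" | jq -r .name)
  bdev_list=("${bdevs_name[@]}")   # here: Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1
  printf 'unclaimed bdev: %s\n' "${bdev_list[@]}"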
00:06:21.617 10:37:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.617 10:37:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.617 10:37:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.617 10:37:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.617 10:37:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:21.617 10:37:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:21.617 [2024-10-08 10:37:42.150953] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:21.617 [2024-10-08 10:37:42.151103] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74643 ] 00:06:21.877 [2024-10-08 10:37:42.284364] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:21.877 [2024-10-08 10:37:42.302634] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.877 [2024-10-08 10:37:42.356766] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.877 [2024-10-08 10:37:42.357091] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.877 [2024-10-08 10:37:42.357170] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.449 10:37:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.449 10:37:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:22.449 10:37:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:22.712 I/O targets: 00:06:22.712 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:22.712 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:22.712 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:22.712 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:22.712 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:22.712 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:22.712 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:22.712 00:06:22.712 00:06:22.712 CUnit - A unit testing framework for C - Version 2.1-3 00:06:22.712 http://cunit.sourceforge.net/ 00:06:22.712 00:06:22.712 00:06:22.712 Suite: bdevio tests on: Nvme3n1 00:06:22.712 Test: blockdev write read block ...passed 00:06:22.712 Test: blockdev write zeroes read block ...passed 00:06:22.712 Test: blockdev write zeroes read no split ...passed 00:06:22.712 Test: blockdev write zeroes read split ...passed 00:06:22.712 Test: blockdev write zeroes read split partial ...passed 00:06:22.712 Test: blockdev reset ...[2024-10-08 10:37:43.131755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:22.712 passed 00:06:22.712 Test: blockdev write read 8 blocks ...[2024-10-08 
10:37:43.135228] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:22.712 passed 00:06:22.712 Test: blockdev write read size > 128k ...passed 00:06:22.712 Test: blockdev write read invalid size ...passed 00:06:22.712 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:22.712 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:22.712 Test: blockdev write read max offset ...passed 00:06:22.712 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:22.712 Test: blockdev writev readv 8 blocks ...passed 00:06:22.712 Test: blockdev writev readv 30 x 1block ...passed 00:06:22.712 Test: blockdev writev readv block ...passed 00:06:22.712 Test: blockdev writev readv size > 128k ...passed 00:06:22.712 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:22.712 Test: blockdev comparev and writev ...[2024-10-08 10:37:43.151822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2c0e000 len:0x1000 00:06:22.712 [2024-10-08 10:37:43.152027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:22.712 passed 00:06:22.712 Test: blockdev nvme passthru rw ...passed 00:06:22.712 Test: blockdev nvme passthru vendor specific ...[2024-10-08 10:37:43.154299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:22.712 [2024-10-08 10:37:43.154359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:22.712 passed 00:06:22.712 Test: blockdev nvme admin passthru ...passed 00:06:22.712 Test: blockdev copy ...passed 00:06:22.712 Suite: bdevio tests on: Nvme2n3 00:06:22.712 Test: blockdev write read block ...passed 00:06:22.712 Test: blockdev write zeroes read block ...passed 00:06:22.712 Test: blockdev write zeroes read no split ...passed 00:06:22.712 Test: blockdev write zeroes read split ...passed 00:06:22.712 Test: blockdev write zeroes read split partial ...passed 00:06:22.712 Test: blockdev reset ...[2024-10-08 10:37:43.184702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:22.712 [2024-10-08 10:37:43.188001] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
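Two things in these bdevio traces are easy to misread as failures: the COMPARE FAILURE (02/85) completions are the deliberate miscompare path exercised by the "comparev and writev" test (the test still reports passed), and each "blockdev reset" disconnects and reconnects the controller on purpose (nvme_ctrlr_disconnect followed by _bdev_nvme_reset_ctrlr_complete). A sketch of triggering such a reset by hand, assuming the bdev_nvme_reset_controller RPC and a controller registered as Nvme2:

  # ask bdev_nvme to reset the attached controller; the log should then show
  # "resetting controller" followed by "Resetting controller successful."
  scripts/rpc.py bdev_nvme_reset_controller Nvme2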
00:06:22.712 passed 00:06:22.712 Test: blockdev write read 8 blocks ...passed 00:06:22.712 Test: blockdev write read size > 128k ...passed 00:06:22.712 Test: blockdev write read invalid size ...passed 00:06:22.712 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:22.712 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:22.712 Test: blockdev write read max offset ...passed 00:06:22.712 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:22.712 Test: blockdev writev readv 8 blocks ...passed 00:06:22.712 Test: blockdev writev readv 30 x 1block ...passed 00:06:22.712 Test: blockdev writev readv block ...passed 00:06:22.712 Test: blockdev writev readv size > 128k ...passed 00:06:22.712 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:22.712 Test: blockdev comparev and writev ...[2024-10-08 10:37:43.204083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2c0a000 len:0x1000 00:06:22.712 [2024-10-08 10:37:43.204138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:22.712 passed 00:06:22.712 Test: blockdev nvme passthru rw ...passed 00:06:22.712 Test: blockdev nvme passthru vendor specific ...passed 00:06:22.712 Test: blockdev nvme admin passthru ...[2024-10-08 10:37:43.206732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:22.712 [2024-10-08 10:37:43.206776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:22.712 passed 00:06:22.712 Test: blockdev copy ...passed 00:06:22.712 Suite: bdevio tests on: Nvme2n2 00:06:22.712 Test: blockdev write read block ...passed 00:06:22.712 Test: blockdev write zeroes read block ...passed 00:06:22.712 Test: blockdev write zeroes read no split ...passed 00:06:22.712 Test: blockdev write zeroes read split ...passed 00:06:22.712 Test: blockdev write zeroes read split partial ...passed 00:06:22.712 Test: blockdev reset ...[2024-10-08 10:37:43.237214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:22.712 [2024-10-08 10:37:43.241711] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:22.712 passed 00:06:22.712 Test: blockdev write read 8 blocks ...passed 00:06:22.712 Test: blockdev write read size > 128k ...passed 00:06:22.712 Test: blockdev write read invalid size ...passed 00:06:22.712 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:22.712 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:22.712 Test: blockdev write read max offset ...passed 00:06:22.712 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:22.712 Test: blockdev writev readv 8 blocks ...passed 00:06:22.712 Test: blockdev writev readv 30 x 1block ...passed 00:06:22.712 Test: blockdev writev readv block ...passed 00:06:22.712 Test: blockdev writev readv size > 128k ...passed 00:06:22.712 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:22.712 Test: blockdev comparev and writev ...[2024-10-08 10:37:43.259338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6c05000 len:0x1000 00:06:22.712 [2024-10-08 10:37:43.259394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:22.712 passed 00:06:22.712 Test: blockdev nvme passthru rw ...passed 00:06:22.712 Test: blockdev nvme passthru vendor specific ...passed 00:06:22.712 Test: blockdev nvme admin passthru ...[2024-10-08 10:37:43.261708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:22.712 [2024-10-08 10:37:43.261751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:22.712 passed 00:06:22.712 Test: blockdev copy ...passed 00:06:22.712 Suite: bdevio tests on: Nvme2n1 00:06:22.712 Test: blockdev write read block ...passed 00:06:22.712 Test: blockdev write zeroes read block ...passed 00:06:22.712 Test: blockdev write zeroes read no split ...passed 00:06:22.712 Test: blockdev write zeroes read split ...passed 00:06:22.975 Test: blockdev write zeroes read split partial ...passed 00:06:22.975 Test: blockdev reset ...[2024-10-08 10:37:43.293726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:22.975 [2024-10-08 10:37:43.298004] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:22.975 passed 00:06:22.975 Test: blockdev write read 8 blocks ...passed 00:06:22.975 Test: blockdev write read size > 128k ...passed 00:06:22.975 Test: blockdev write read invalid size ...passed 00:06:22.975 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:22.975 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:22.975 Test: blockdev write read max offset ...passed 00:06:22.975 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:22.975 Test: blockdev writev readv 8 blocks ...passed 00:06:22.975 Test: blockdev writev readv 30 x 1block ...passed 00:06:22.975 Test: blockdev writev readv block ...passed 00:06:22.975 Test: blockdev writev readv size > 128k ...passed 00:06:22.975 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:22.975 Test: blockdev comparev and writev ...[2024-10-08 10:37:43.314184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2802000 len:0x1000 00:06:22.975 [2024-10-08 10:37:43.314242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:22.975 passed 00:06:22.975 Test: blockdev nvme passthru rw ...passed 00:06:22.975 Test: blockdev nvme passthru vendor specific ...[2024-10-08 10:37:43.317234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:22.975 passed 00:06:22.975 Test: blockdev nvme admin passthru ...[2024-10-08 10:37:43.317387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:22.975 passed 00:06:22.975 Test: blockdev copy ...passed 00:06:22.975 Suite: bdevio tests on: Nvme1n1p2 00:06:22.975 Test: blockdev write read block ...passed 00:06:22.975 Test: blockdev write zeroes read block ...passed 00:06:22.975 Test: blockdev write zeroes read no split ...passed 00:06:22.975 Test: blockdev write zeroes read split ...passed 00:06:22.975 Test: blockdev write zeroes read split partial ...passed 00:06:22.975 Test: blockdev reset ...[2024-10-08 10:37:43.350764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:22.975 [2024-10-08 10:37:43.354598] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:22.975 passed 00:06:22.975 Test: blockdev write read 8 blocks ...passed 00:06:22.975 Test: blockdev write read size > 128k ...passed 00:06:22.975 Test: blockdev write read invalid size ...passed 00:06:22.975 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:22.975 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:22.975 Test: blockdev write read max offset ...passed 00:06:22.975 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:22.975 Test: blockdev writev readv 8 blocks ...passed 00:06:22.975 Test: blockdev writev readv 30 x 1block ...passed 00:06:22.975 Test: blockdev writev readv block ...passed 00:06:22.975 Test: blockdev writev readv size > 128k ...passed 00:06:22.975 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:22.975 Test: blockdev comparev and writev ...[2024-10-08 10:37:43.372183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2da63b000 len:0x1000 00:06:22.975 [2024-10-08 10:37:43.372374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:22.975 passed 00:06:22.975 Test: blockdev nvme passthru rw ...passed 00:06:22.975 Test: blockdev nvme passthru vendor specific ...passed 00:06:22.975 Test: blockdev nvme admin passthru ...passed 00:06:22.975 Test: blockdev copy ...passed 00:06:22.975 Suite: bdevio tests on: Nvme1n1p1 00:06:22.975 Test: blockdev write read block ...passed 00:06:22.975 Test: blockdev write zeroes read block ...passed 00:06:22.975 Test: blockdev write zeroes read no split ...passed 00:06:22.975 Test: blockdev write zeroes read split ...passed 00:06:22.975 Test: blockdev write zeroes read split partial ...passed 00:06:22.975 Test: blockdev reset ...[2024-10-08 10:37:43.401172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:22.975 [2024-10-08 10:37:43.404451] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
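The LBAs in these COMPARE commands are not arbitrary: lba:655360 above for Nvme1n1p2 and lba:256 below for Nvme1n1p1 are exactly the offset_blocks values reported for the two GPT partitions in the earlier bdev dump, because I/O to a partition bdev is remapped onto its base Nvme1n1 device. A quick cross-check sketch, assuming the GPT bdevs are still registered:

  # confirm where a partition bdev sits on its base device
  scripts/rpc.py bdev_get_bdevs -b Nvme1n1p2 | jq '.[0].driver_specific.gpt.offset_blocks'
  # -> 655360, matching lba:655360 in the COMPARE trace above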
00:06:22.975 passed 00:06:22.975 Test: blockdev write read 8 blocks ...passed 00:06:22.975 Test: blockdev write read size > 128k ...passed 00:06:22.975 Test: blockdev write read invalid size ...passed 00:06:22.975 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:22.975 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:22.975 Test: blockdev write read max offset ...passed 00:06:22.975 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:22.975 Test: blockdev writev readv 8 blocks ...passed 00:06:22.975 Test: blockdev writev readv 30 x 1block ...passed 00:06:22.975 Test: blockdev writev readv block ...passed 00:06:22.975 Test: blockdev writev readv size > 128k ...passed 00:06:22.975 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:22.975 Test: blockdev comparev and writev ...[2024-10-08 10:37:43.423463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2da637000 len:0x1000 00:06:22.975 [2024-10-08 10:37:43.423523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:22.975 passed 00:06:22.975 Test: blockdev nvme passthru rw ...passed 00:06:22.975 Test: blockdev nvme passthru vendor specific ...passed 00:06:22.975 Test: blockdev nvme admin passthru ...passed 00:06:22.975 Test: blockdev copy ...passed 00:06:22.975 Suite: bdevio tests on: Nvme0n1 00:06:22.975 Test: blockdev write read block ...passed 00:06:22.975 Test: blockdev write zeroes read block ...passed 00:06:22.975 Test: blockdev write zeroes read no split ...passed 00:06:22.975 Test: blockdev write zeroes read split ...passed 00:06:22.975 Test: blockdev write zeroes read split partial ...passed 00:06:22.975 Test: blockdev reset ...[2024-10-08 10:37:43.448641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:22.975 [2024-10-08 10:37:43.451996] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:22.975 passed 00:06:22.975 Test: blockdev write read 8 blocks ...passed 00:06:22.975 Test: blockdev write read size > 128k ...passed 00:06:22.975 Test: blockdev write read invalid size ...passed 00:06:22.975 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:22.975 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:22.975 Test: blockdev write read max offset ...passed 00:06:22.975 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:22.975 Test: blockdev writev readv 8 blocks ...passed 00:06:22.975 Test: blockdev writev readv 30 x 1block ...passed 00:06:22.975 Test: blockdev writev readv block ...passed 00:06:22.975 Test: blockdev writev readv size > 128k ...passed 00:06:22.975 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:22.975 Test: blockdev comparev and writev ...passed 00:06:22.975 Test: blockdev nvme passthru rw ...[2024-10-08 10:37:43.467745] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:22.975 separate metadata which is not supported yet. 
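The skip above is expected rather than a defect in this run: the earlier bdev dump shows Nvme0n1 with "md_size": 64 and "md_interleave": false, i.e. 64 bytes of separate metadata per block, which the comparev_and_writev helper in bdevio does not support yet. A sketch for spotting such bdevs ahead of time, using only fields present in that dump:

  # name every bdev formatted with separate (non-interleaved) metadata
  scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'
  # -> Nvme0n1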
00:06:22.975 passed 00:06:22.975 Test: blockdev nvme passthru vendor specific ...[2024-10-08 10:37:43.470053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:22.975 passed 00:06:22.975 Test: blockdev nvme admin passthru ...[2024-10-08 10:37:43.470437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:22.975 passed 00:06:22.975 Test: blockdev copy ...passed 00:06:22.975 00:06:22.975 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.975 suites 7 7 n/a 0 0 00:06:22.975 tests 161 161 161 0 0 00:06:22.975 asserts 1025 1025 1025 0 n/a 00:06:22.975 00:06:22.975 Elapsed time = 0.842 seconds 00:06:22.975 0 00:06:22.975 10:37:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74643 00:06:22.975 10:37:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 74643 ']' 00:06:22.976 10:37:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 74643 00:06:22.976 10:37:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:22.976 10:37:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.976 10:37:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74643 00:06:22.976 10:37:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.976 10:37:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.976 10:37:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74643' 00:06:22.976 killing process with pid 74643 00:06:22.976 10:37:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 74643 00:06:22.976 10:37:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 74643 00:06:23.237 10:37:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:23.237 00:06:23.237 real 0m1.654s 00:06:23.237 user 0m3.960s 00:06:23.237 sys 0m0.349s 00:06:23.237 10:37:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.237 ************************************ 00:06:23.237 10:37:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:23.237 END TEST bdev_bounds 00:06:23.237 ************************************ 00:06:23.237 10:37:43 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:23.237 10:37:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:23.237 10:37:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.237 10:37:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:23.237 ************************************ 00:06:23.237 START TEST bdev_nbd 00:06:23.237 ************************************ 00:06:23.237 10:37:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:23.237 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74696 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74696 /var/tmp/spdk-nbd.sock 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 74696 ']' 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.499 10:37:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:23.499 [2024-10-08 10:37:43.873308] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:06:23.499 [2024-10-08 10:37:43.873528] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:23.499 [2024-10-08 10:37:44.012286] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:23.499 [2024-10-08 10:37:44.032773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.499 [2024-10-08 10:37:44.064954] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:24.442 1+0 records in 00:06:24.442 1+0 records out 00:06:24.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122104 s, 3.4 MB/s 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:24.442 10:37:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:24.704 1+0 records in 00:06:24.704 1+0 records out 00:06:24.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000870777 s, 4.7 MB/s 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:24.704 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:24.963 1+0 records in 00:06:24.963 1+0 records out 00:06:24.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110698 s, 3.7 MB/s 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:24.963 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:25.224 1+0 records in 00:06:25.224 1+0 records out 00:06:25.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117318 s, 3.5 MB/s 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:25.224 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:25.485 1+0 records in 00:06:25.485 1+0 records out 00:06:25.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104184 s, 3.9 MB/s 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:25.485 10:37:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:25.745 1+0 records in 00:06:25.745 1+0 records out 00:06:25.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105664 s, 3.9 MB/s 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:25.745 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:26.005 1+0 records in 00:06:26.005 1+0 records out 00:06:26.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000971464 s, 4.2 MB/s 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:26.005 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:26.266 { 00:06:26.266 "nbd_device": "/dev/nbd0", 00:06:26.266 "bdev_name": "Nvme0n1" 00:06:26.266 }, 00:06:26.266 { 00:06:26.266 "nbd_device": "/dev/nbd1", 00:06:26.266 "bdev_name": "Nvme1n1p1" 00:06:26.266 }, 00:06:26.266 { 00:06:26.266 "nbd_device": "/dev/nbd2", 00:06:26.266 "bdev_name": "Nvme1n1p2" 00:06:26.266 }, 00:06:26.266 { 00:06:26.266 "nbd_device": "/dev/nbd3", 00:06:26.266 "bdev_name": "Nvme2n1" 00:06:26.266 }, 00:06:26.266 { 00:06:26.266 "nbd_device": "/dev/nbd4", 00:06:26.266 "bdev_name": "Nvme2n2" 00:06:26.266 }, 00:06:26.266 { 00:06:26.266 "nbd_device": "/dev/nbd5", 00:06:26.266 "bdev_name": "Nvme2n3" 00:06:26.266 }, 00:06:26.266 { 00:06:26.266 "nbd_device": "/dev/nbd6", 00:06:26.266 "bdev_name": "Nvme3n1" 00:06:26.266 } 00:06:26.266 ]' 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:26.266 { 00:06:26.266 "nbd_device": "/dev/nbd0", 00:06:26.266 "bdev_name": "Nvme0n1" 00:06:26.266 }, 00:06:26.266 { 00:06:26.266 "nbd_device": "/dev/nbd1", 00:06:26.266 "bdev_name": "Nvme1n1p1" 00:06:26.266 }, 00:06:26.266 { 00:06:26.266 "nbd_device": "/dev/nbd2", 00:06:26.266 "bdev_name": "Nvme1n1p2" 00:06:26.266 }, 00:06:26.266 { 00:06:26.266 "nbd_device": "/dev/nbd3", 00:06:26.266 "bdev_name": "Nvme2n1" 00:06:26.266 }, 00:06:26.266 { 00:06:26.266 "nbd_device": "/dev/nbd4", 00:06:26.266 "bdev_name": "Nvme2n2" 00:06:26.266 }, 00:06:26.266 { 00:06:26.266 "nbd_device": "/dev/nbd5", 00:06:26.266 "bdev_name": "Nvme2n3" 00:06:26.266 }, 00:06:26.266 { 00:06:26.266 "nbd_device": "/dev/nbd6", 00:06:26.266 "bdev_name": "Nvme3n1" 00:06:26.266 } 00:06:26.266 ]' 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 
/dev/nbd6' 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.266 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.525 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:26.525 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.525 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.525 10:37:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:26.525 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:26.525 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:26.525 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:26.525 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.525 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.525 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.525 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:26.525 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.526 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.526 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:26.786 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:26.786 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:26.786 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:26.786 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.786 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.786 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:26.786 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:26.786 10:37:47 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.786 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.786 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:27.075 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:27.075 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:27.075 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:27.075 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.075 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.075 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:27.075 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:27.075 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.075 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.075 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.336 10:37:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:27.597 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd6 00:06:27.597 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:27.597 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:06:27.597 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.597 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.597 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:27.597 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:27.597 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.597 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.597 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.598 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.858 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # 
bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:27.859 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:28.118 /dev/nbd0 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.118 1+0 records in 00:06:28.118 1+0 records out 00:06:28.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000723361 s, 5.7 MB/s 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:28.118 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:28.377 /dev/nbd1 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:28.377 10:37:48 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.377 1+0 records in 00:06:28.377 1+0 records out 00:06:28.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348824 s, 11.7 MB/s 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:28.377 10:37:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:28.635 /dev/nbd10 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.635 1+0 records in 00:06:28.635 1+0 records out 00:06:28.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401341 s, 10.2 MB/s 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 
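The trace above repeats one pattern for every device the test exports: the harness's waitfornbd helper polls /proc/partitions (up to 20 tries) until the kernel has registered the new node, then proves the device actually serves data with a single 4 KiB O_DIRECT read, checking that a non-empty block came back. A minimal standalone sketch of that pattern; the scratch-file path is illustrative, and the sleep between retries is an assumption (the trace only shows the checks):

    # Sketch of the waitfornbd pattern traced above.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # retry delay: assumed, not visible in the trace
        done
        for ((i = 1; i <= 20; i++)); do
            # One direct-I/O read bypasses the page cache, so a dead nbd
            # connection fails here instead of being masked by caching.
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                local size
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }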
00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:28.635 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:28.893 /dev/nbd11 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.893 1+0 records in 00:06:28.893 1+0 records out 00:06:28.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460067 s, 8.9 MB/s 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:28.893 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:29.151 /dev/nbd12 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.151 10:37:49 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.151 1+0 records in 00:06:29.151 1+0 records out 00:06:29.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351976 s, 11.6 MB/s 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:29.151 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:29.410 /dev/nbd13 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.410 1+0 records in 00:06:29.410 1+0 records out 00:06:29.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481596 s, 8.5 MB/s 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:29.410 10:37:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:29.668 /dev/nbd14 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:29.668 1+0 records in 00:06:29.668 1+0 records out 00:06:29.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516618 s, 7.9 MB/s 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.668 { 00:06:29.668 "nbd_device": "/dev/nbd0", 00:06:29.668 "bdev_name": "Nvme0n1" 00:06:29.668 }, 00:06:29.668 { 00:06:29.668 "nbd_device": "/dev/nbd1", 00:06:29.668 "bdev_name": "Nvme1n1p1" 00:06:29.668 }, 00:06:29.668 { 00:06:29.668 "nbd_device": "/dev/nbd10", 00:06:29.668 "bdev_name": "Nvme1n1p2" 00:06:29.668 }, 00:06:29.668 { 00:06:29.668 "nbd_device": "/dev/nbd11", 00:06:29.668 "bdev_name": "Nvme2n1" 00:06:29.668 
}, 00:06:29.668 { 00:06:29.668 "nbd_device": "/dev/nbd12", 00:06:29.668 "bdev_name": "Nvme2n2" 00:06:29.668 }, 00:06:29.668 { 00:06:29.668 "nbd_device": "/dev/nbd13", 00:06:29.668 "bdev_name": "Nvme2n3" 00:06:29.668 }, 00:06:29.668 { 00:06:29.668 "nbd_device": "/dev/nbd14", 00:06:29.668 "bdev_name": "Nvme3n1" 00:06:29.668 } 00:06:29.668 ]' 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.668 { 00:06:29.668 "nbd_device": "/dev/nbd0", 00:06:29.668 "bdev_name": "Nvme0n1" 00:06:29.668 }, 00:06:29.668 { 00:06:29.668 "nbd_device": "/dev/nbd1", 00:06:29.668 "bdev_name": "Nvme1n1p1" 00:06:29.668 }, 00:06:29.668 { 00:06:29.668 "nbd_device": "/dev/nbd10", 00:06:29.668 "bdev_name": "Nvme1n1p2" 00:06:29.668 }, 00:06:29.668 { 00:06:29.668 "nbd_device": "/dev/nbd11", 00:06:29.668 "bdev_name": "Nvme2n1" 00:06:29.668 }, 00:06:29.668 { 00:06:29.668 "nbd_device": "/dev/nbd12", 00:06:29.668 "bdev_name": "Nvme2n2" 00:06:29.668 }, 00:06:29.668 { 00:06:29.668 "nbd_device": "/dev/nbd13", 00:06:29.668 "bdev_name": "Nvme2n3" 00:06:29.668 }, 00:06:29.668 { 00:06:29.668 "nbd_device": "/dev/nbd14", 00:06:29.668 "bdev_name": "Nvme3n1" 00:06:29.668 } 00:06:29.668 ]' 00:06:29.668 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.926 /dev/nbd1 00:06:29.926 /dev/nbd10 00:06:29.926 /dev/nbd11 00:06:29.926 /dev/nbd12 00:06:29.926 /dev/nbd13 00:06:29.926 /dev/nbd14' 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.926 /dev/nbd1 00:06:29.926 /dev/nbd10 00:06:29.926 /dev/nbd11 00:06:29.926 /dev/nbd12 00:06:29.926 /dev/nbd13 00:06:29.926 /dev/nbd14' 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:29.926 256+0 records in 00:06:29.926 256+0 records out 00:06:29.926 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0078784 s, 133 MB/s 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.926 256+0 records in 00:06:29.926 256+0 records out 00:06:29.926 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0580225 s, 18.1 MB/s 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.926 256+0 records in 00:06:29.926 256+0 records out 00:06:29.926 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0592985 s, 17.7 MB/s 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:29.926 256+0 records in 00:06:29.926 256+0 records out 00:06:29.926 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0602072 s, 17.4 MB/s 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.926 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:30.185 256+0 records in 00:06:30.185 256+0 records out 00:06:30.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0600862 s, 17.5 MB/s 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:30.185 256+0 records in 00:06:30.185 256+0 records out 00:06:30.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0553192 s, 19.0 MB/s 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:30.185 256+0 records in 00:06:30.185 256+0 records out 00:06:30.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0556292 s, 18.8 MB/s 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:30.185 256+0 records in 00:06:30.185 256+0 records out 00:06:30.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.056541 s, 18.5 MB/s 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # 
'[' verify = write ']' 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.185 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.444 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.444 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.444 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.444 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.444 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.444 10:37:50 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.444 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.444 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.444 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.444 10:37:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.702 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.702 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.702 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.702 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.702 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.702 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.702 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.702 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.702 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.702 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:30.959 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:30.959 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:30.959 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:30.959 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.959 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.959 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:30.959 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:30.959 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.959 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.959 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:31.219 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:31.219 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:31.219 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:31.219 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.219 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.219 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:31.219 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.219 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.219 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.219 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:31.219 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:31.480 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:31.480 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:31.480 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.480 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.480 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:31.480 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.480 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.480 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.480 10:37:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:31.480 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:31.480 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:31.480 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:31.480 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.480 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.480 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:31.480 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.480 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.480 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.480 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:31.741 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:31.741 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:31.741 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:31.741 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.741 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.741 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:31.741 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:31.741 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.741 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.741 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.741 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 
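The cmp -b -n 1M entries above are the tail end of the data-integrity pass: before tearing the devices down, the harness generated one 1 MiB random pattern, wrote it through all seven nbd nodes with O_DIRECT, and compared each device back against the source file byte for byte. Condensed into a standalone sketch (device list and sizes as in the log, scratch path illustrative):

    # Sketch of the nbd_dd_data_verify write/verify round trip traced above.
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    tmp_file=/tmp/nbdrandtest

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # one 1 MiB random pattern
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # push it through every device
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"   # byte-for-byte readback; a nonzero rc fails the test
    done
    rm "$tmp_file"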
00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:32.003 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:32.264 malloc_lvol_verify 00:06:32.264 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:32.525 4463639f-54ea-4046-b75e-1e88a80ea32c 00:06:32.525 10:37:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:32.785 6a7adbb6-7cc6-4200-bc79-4b9691e6534d 00:06:32.785 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:33.042 /dev/nbd0 00:06:33.042 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:33.042 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:33.042 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:33.042 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:33.042 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:33.042 mke2fs 1.47.0 (5-Feb-2023) 00:06:33.042 Discarding device blocks: 0/4096 done 00:06:33.042 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:33.042 00:06:33.042 Allocating group tables: 0/1 done 00:06:33.042 Writing inode tables: 0/1 done 00:06:33.042 Creating journal (1024 blocks): done 00:06:33.042 Writing superblocks and filesystem accounting information: 0/1 done 00:06:33.042 00:06:33.042 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:33.042 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.042 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:33.042 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 
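nbd_with_lvol_verify, traced just above, exercises one more layer of the stack: instead of exporting an NVMe bdev it builds a malloc bdev, puts an lvolstore and a 4 MB logical volume on top, exports that over NBD, and proves the whole chain end to end with a real mkfs.ext4 (the mke2fs output above). The RPC sequence, condensed; socket path, names, and sizes are taken from the log:

    # Sketch of the nbd_with_lvol_verify RPC sequence traced above.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvolstore on the malloc bdev
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MB lvol inside the store
    $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # end-to-end write exercise
    $RPC nbd_stop_disk /dev/nbd0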
00:06:33.042 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:33.042 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.042 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.042 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74696 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 74696 ']' 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 74696 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74696 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74696' 00:06:33.302 killing process with pid 74696 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 74696 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 74696 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:33.302 00:06:33.302 real 0m10.009s 00:06:33.302 user 0m14.603s 00:06:33.302 sys 0m3.513s 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.302 10:37:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:33.302 ************************************ 00:06:33.302 END TEST bdev_nbd 00:06:33.302 ************************************ 00:06:33.302 10:37:53 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:33.302 10:37:53 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:06:33.302 skipping fio tests on NVMe due to multi-ns failures. 00:06:33.302 10:37:53 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:06:33.302 10:37:53 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
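bdev_nbd ends the way every stage here does: the dedicated nbd RPC server (pid 74696) is torn down through the harness's killprocess helper, which checks the PID is alive and still one of ours before signalling it, then waits so a bad exit status actually fails the test. Its observable shape, per the trace (the sudo branch is handled specially in the real helper and only stubbed here):

    # Sketch of the killprocess teardown traced above.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                           # still running?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
            [ "$process_name" = sudo ] && return 1           # stub: real helper treats sudo specially
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap it so the exit code is collected
    }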
00:06:33.302 10:37:53 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:33.302 10:37:53 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:33.302 10:37:53 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:33.302 10:37:53 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.302 10:37:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:33.561 ************************************ 00:06:33.561 START TEST bdev_verify 00:06:33.561 ************************************ 00:06:33.561 10:37:53 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:33.561 [2024-10-08 10:37:53.941741] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:33.561 [2024-10-08 10:37:53.941871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75103 ] 00:06:33.561 [2024-10-08 10:37:54.072533] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:33.561 [2024-10-08 10:37:54.091496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.561 [2024-10-08 10:37:54.133944] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.561 [2024-10-08 10:37:54.134055] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.127 Running I/O for 5 seconds... 
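From here the NBD layer is out of the picture: bdev_verify drives every bdev in bdev.json directly with the bdevperf example app. The flags, read straight from the invocation above: queue depth 128, 4 KiB I/Os, the verify workload (each read is checked against the data previously written), a 5 second run, and core mask 0x3; with -C each of the two cores gets its own job per bdev, which is why the table below lists every device twice (Core Mask 0x1 and 0x2).

    # The bdev_verify invocation as run above:
    #   -q 128     128 outstanding I/Os per job
    #   -o 4096    4 KiB I/O size
    #   -w verify  read data back and check it
    #   -t 5       run for 5 seconds
    #   -m 0x3     cores 0 and 1 (two reactor threads)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3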
00:06:36.478 26880.00 IOPS, 105.00 MiB/s [2024-10-08T10:37:57.988Z] 26784.00 IOPS, 104.62 MiB/s [2024-10-08T10:37:58.933Z] 26730.67 IOPS, 104.42 MiB/s [2024-10-08T10:37:59.872Z] 26320.00 IOPS, 102.81 MiB/s [2024-10-08T10:37:59.872Z] 25139.20 IOPS, 98.20 MiB/s 00:06:39.295 Latency(us) 00:06:39.295 [2024-10-08T10:37:59.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:39.295 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:39.295 Verification LBA range: start 0x0 length 0xbd0bd 00:06:39.295 Nvme0n1 : 5.07 1918.95 7.50 0.00 0.00 66550.30 12603.08 72593.72 00:06:39.295 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:39.295 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:39.295 Nvme0n1 : 5.05 1622.65 6.34 0.00 0.00 78593.18 13208.02 75013.51 00:06:39.295 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:39.295 Verification LBA range: start 0x0 length 0x4ff80 00:06:39.295 Nvme1n1p1 : 5.07 1916.99 7.49 0.00 0.00 66506.09 14115.45 69770.63 00:06:39.295 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:39.295 Verification LBA range: start 0x4ff80 length 0x4ff80 00:06:39.295 Nvme1n1p1 : 5.05 1622.16 6.34 0.00 0.00 78459.64 15325.34 73400.32 00:06:39.295 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:39.295 Verification LBA range: start 0x0 length 0x4ff7f 00:06:39.295 Nvme1n1p2 : 5.08 1916.43 7.49 0.00 0.00 66410.58 15123.69 67754.14 00:06:39.295 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:39.295 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:06:39.295 Nvme1n1p2 : 5.05 1621.66 6.33 0.00 0.00 78334.23 16938.54 74206.92 00:06:39.295 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:39.295 Verification LBA range: start 0x0 length 0x80000 00:06:39.295 Nvme2n1 : 5.08 1915.94 7.48 0.00 0.00 66305.93 16232.76 64124.46 00:06:39.295 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:39.295 Verification LBA range: start 0x80000 length 0x80000 00:06:39.295 Nvme2n1 : 5.07 1628.63 6.36 0.00 0.00 77869.00 5343.70 69367.34 00:06:39.295 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:39.295 Verification LBA range: start 0x0 length 0x80000 00:06:39.295 Nvme2n2 : 5.08 1915.45 7.48 0.00 0.00 66217.45 15627.82 64931.05 00:06:39.295 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:39.295 Verification LBA range: start 0x80000 length 0x80000 00:06:39.295 Nvme2n2 : 5.08 1637.14 6.40 0.00 0.00 77421.27 8368.44 71787.13 00:06:39.295 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:39.295 Verification LBA range: start 0x0 length 0x80000 00:06:39.295 Nvme2n3 : 5.08 1914.91 7.48 0.00 0.00 66123.60 13611.32 66544.25 00:06:39.295 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:39.295 Verification LBA range: start 0x80000 length 0x80000 00:06:39.295 Nvme2n3 : 5.08 1636.38 6.39 0.00 0.00 77289.80 9729.58 73803.62 00:06:39.295 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:39.295 Verification LBA range: start 0x0 length 0x20000 00:06:39.295 Nvme3n1 : 5.08 1914.39 7.48 0.00 0.00 66053.63 9074.22 70577.23 00:06:39.295 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:39.295 Verification LBA range: start 0x20000 length 0x20000 00:06:39.295 
Nvme3n1 : 5.09 1635.95 6.39 0.00 0.00 77225.67 8721.33 75820.11 00:06:39.295 [2024-10-08T10:37:59.872Z] =================================================================================================================== 00:06:39.295 [2024-10-08T10:37:59.872Z] Total : 24817.64 96.94 0.00 0.00 71621.70 5343.70 75820.11 00:06:39.867 00:06:39.867 real 0m6.412s 00:06:39.867 user 0m12.036s 00:06:39.867 sys 0m0.225s 00:06:39.867 10:38:00 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.867 ************************************ 00:06:39.867 END TEST bdev_verify 00:06:39.867 ************************************ 00:06:39.867 10:38:00 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:39.867 10:38:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:39.867 10:38:00 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:39.867 10:38:00 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.867 10:38:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:39.867 ************************************ 00:06:39.867 START TEST bdev_verify_big_io 00:06:39.867 ************************************ 00:06:39.867 10:38:00 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:39.867 [2024-10-08 10:38:00.427661] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:39.867 [2024-10-08 10:38:00.427821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75190 ] 00:06:40.128 [2024-10-08 10:38:00.560881] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:40.128 [2024-10-08 10:38:00.581367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.128 [2024-10-08 10:38:00.631993] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.128 [2024-10-08 10:38:00.632103] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.699 Running I/O for 5 seconds... 
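The MiB/s figures bdevperf prints are simply IOPS scaled by the IO size passed with -o. A quick sanity check of the first verify sample above, as a standalone sketch rather than harness output:

  # 26880 IOPS at 4096-byte IOs -> MiB/s
  awk 'BEGIN { printf "%.2f\n", 26880 * 4096 / 1048576 }'   # prints 105.00, matching "26880.00 IOPS, 105.00 MiB/s"

The same relation holds for the 64 KiB big-IO pass below, e.g. 362 IOPS * 65536 B = 22.62 MiB/s.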
00:06:44.930 362.00 IOPS, 22.62 MiB/s [2024-10-08T10:38:07.413Z] 1846.00 IOPS, 115.38 MiB/s [2024-10-08T10:38:07.414Z] 2327.33 IOPS, 145.46 MiB/s 00:06:46.837 Latency(us) 00:06:46.837 [2024-10-08T10:38:07.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:46.837 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:46.837 Verification LBA range: start 0x0 length 0xbd0b 00:06:46.837 Nvme0n1 : 6.13 70.45 4.40 0.00 0.00 1710106.37 24903.68 1897115.96 00:06:46.837 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:46.837 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:46.837 Nvme0n1 : 5.64 118.63 7.41 0.00 0.00 1035035.44 20467.40 1238932.87 00:06:46.837 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:46.837 Verification LBA range: start 0x0 length 0x4ff8 00:06:46.837 Nvme1n1p1 : 5.93 113.18 7.07 0.00 0.00 1056452.94 89532.26 1058255.16 00:06:46.837 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:46.837 Verification LBA range: start 0x4ff8 length 0x4ff8 00:06:46.837 Nvme1n1p1 : 5.77 121.76 7.61 0.00 0.00 980584.93 105664.20 1071160.71 00:06:46.837 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:46.837 Verification LBA range: start 0x0 length 0x4ff7 00:06:46.837 Nvme1n1p2 : 5.93 112.31 7.02 0.00 0.00 1026815.44 147607.24 1013085.74 00:06:46.837 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:46.837 Verification LBA range: start 0x4ff7 length 0x4ff7 00:06:46.837 Nvme1n1p2 : 5.92 125.57 7.85 0.00 0.00 920967.81 87112.47 961463.53 00:06:46.837 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:46.837 Verification LBA range: start 0x0 length 0x8000 00:06:46.837 Nvme2n1 : 6.00 117.38 7.34 0.00 0.00 963250.02 64527.75 1032444.06 00:06:46.837 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:46.837 Verification LBA range: start 0x8000 length 0x8000 00:06:46.837 Nvme2n1 : 5.93 121.15 7.57 0.00 0.00 931904.51 61704.66 1910021.51 00:06:46.837 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:46.837 Verification LBA range: start 0x0 length 0x8000 00:06:46.837 Nvme2n2 : 6.08 122.20 7.64 0.00 0.00 898163.99 25710.28 1058255.16 00:06:46.837 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:46.837 Verification LBA range: start 0x8000 length 0x8000 00:06:46.837 Nvme2n2 : 5.98 125.73 7.86 0.00 0.00 870775.81 52428.80 1961643.72 00:06:46.837 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:46.837 Verification LBA range: start 0x0 length 0x8000 00:06:46.837 Nvme2n3 : 6.13 125.52 7.85 0.00 0.00 844885.01 51420.55 1084066.26 00:06:46.837 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:46.837 Verification LBA range: start 0x8000 length 0x8000 00:06:46.837 Nvme2n3 : 6.03 134.87 8.43 0.00 0.00 787715.93 24702.03 2000360.37 00:06:46.837 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:46.837 Verification LBA range: start 0x0 length 0x2000 00:06:46.837 Nvme3n1 : 6.14 142.43 8.90 0.00 0.00 730783.27 2608.84 1109877.37 00:06:46.837 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:46.837 Verification LBA range: start 0x2000 length 0x2000 00:06:46.837 Nvme3n1 : 6.14 163.99 10.25 0.00 0.00 629932.62 431.66 2039077.02 00:06:46.837 
[2024-10-08T10:38:07.414Z] =================================================================================================================== 00:06:46.837 [2024-10-08T10:38:07.414Z] Total : 1715.18 107.20 0.00 0.00 918733.97 431.66 2039077.02 00:06:47.771 00:06:47.771 real 0m7.807s 00:06:47.771 user 0m14.744s 00:06:47.771 sys 0m0.308s 00:06:47.771 10:38:08 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.771 ************************************ 00:06:47.771 END TEST bdev_verify_big_io 00:06:47.771 ************************************ 00:06:47.771 10:38:08 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:47.772 10:38:08 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:47.772 10:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:47.772 10:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.772 10:38:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.772 ************************************ 00:06:47.772 START TEST bdev_write_zeroes 00:06:47.772 ************************************ 00:06:47.772 10:38:08 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:47.772 [2024-10-08 10:38:08.295712] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:47.772 [2024-10-08 10:38:08.295877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75295 ] 00:06:48.033 [2024-10-08 10:38:08.428751] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.033 [2024-10-08 10:38:08.448895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.033 [2024-10-08 10:38:08.498260] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.601 Running I/O for 1 seconds... 
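All three bdevperf passes in this suite (the 4 KiB verify, the 64 KiB big-IO verify, and the one-second write_zeroes pass now starting) share a single invocation shape, varying only the IO size, workload and runtime. A minimal sketch for re-running the verify pass by hand, assuming the same vagrant checkout as this log; flag meanings are paraphrased from bdevperf's usage text:

  SPDK=/home/vagrant/spdk_repo/spdk
  # -q 128: 128 IOs in flight per job; -o 4096: 4 KiB IOs (65536 for the big-IO pass)
  # -w verify: write a pattern, read it back, compare; -t 5: run for five seconds
  # -C: let every core submit to each bdev; -m 0x3: run reactors on cores 0 and 1
  "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3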
00:06:49.537 62208.00 IOPS, 243.00 MiB/s 00:06:49.537 Latency(us) 00:06:49.537 [2024-10-08T10:38:10.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.537 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:49.537 Nvme0n1 : 1.03 8842.43 34.54 0.00 0.00 14442.93 6856.07 27021.00 00:06:49.537 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:49.537 Nvme1n1p1 : 1.03 8831.45 34.50 0.00 0.00 14436.26 11292.36 26416.05 00:06:49.537 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:49.537 Nvme1n1p2 : 1.03 8820.58 34.46 0.00 0.00 14380.23 11292.36 25004.50 00:06:49.537 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:49.537 Nvme2n1 : 1.03 8810.65 34.42 0.00 0.00 14342.28 11241.94 23794.61 00:06:49.537 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:49.537 Nvme2n2 : 1.03 8800.76 34.38 0.00 0.00 14312.46 9679.16 23895.43 00:06:49.537 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:49.537 Nvme2n3 : 1.03 8790.55 34.34 0.00 0.00 14280.41 7158.55 25811.10 00:06:49.537 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:49.537 Nvme3n1 : 1.03 8718.88 34.06 0.00 0.00 14373.15 10233.70 27222.65 00:06:49.537 [2024-10-08T10:38:10.114Z] =================================================================================================================== 00:06:49.537 [2024-10-08T10:38:10.114Z] Total : 61615.30 240.68 0.00 0.00 14366.81 6856.07 27222.65 00:06:49.798 00:06:49.798 real 0m1.948s 00:06:49.798 user 0m1.609s 00:06:49.798 sys 0m0.225s 00:06:49.798 10:38:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.798 ************************************ 00:06:49.798 END TEST bdev_write_zeroes 00:06:49.798 ************************************ 00:06:49.798 10:38:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:49.798 10:38:10 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:49.798 10:38:10 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:49.798 10:38:10 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.798 10:38:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.798 ************************************ 00:06:49.798 START TEST bdev_json_nonenclosed 00:06:49.798 ************************************ 00:06:49.798 10:38:10 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:49.798 [2024-10-08 10:38:10.295290] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:06:49.798 [2024-10-08 10:38:10.295432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75337 ] 00:06:50.059 [2024-10-08 10:38:10.427118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:50.059 [2024-10-08 10:38:10.448569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.059 [2024-10-08 10:38:10.498883] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.059 [2024-10-08 10:38:10.498997] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:50.059 [2024-10-08 10:38:10.499019] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:50.059 [2024-10-08 10:38:10.499030] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.059 00:06:50.059 real 0m0.379s 00:06:50.059 user 0m0.158s 00:06:50.059 sys 0m0.116s 00:06:50.059 10:38:10 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.059 ************************************ 00:06:50.059 END TEST bdev_json_nonenclosed 00:06:50.059 ************************************ 00:06:50.059 10:38:10 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:50.320 10:38:10 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:50.320 10:38:10 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:50.320 10:38:10 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.320 10:38:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.320 ************************************ 00:06:50.320 START TEST bdev_json_nonarray 00:06:50.320 ************************************ 00:06:50.320 10:38:10 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:50.320 [2024-10-08 10:38:10.736089] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:50.320 [2024-10-08 10:38:10.736226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75357 ] 00:06:50.320 [2024-10-08 10:38:10.867951] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:50.320 [2024-10-08 10:38:10.889006] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.580 [2024-10-08 10:38:10.939520] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.580 [2024-10-08 10:38:10.939640] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
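Both JSON negative tests drive the same config loader (the matching rpc.c and app.c teardown messages follow below): the file handed to --json must be a single JSON object whose "subsystems" member is an array, and nonenclosed.json/nonarray.json each break one of those two rules, producing the json_config_prepare_ctx errors seen above. A minimal sketch of the accepted shape; the file path and the malloc bdev are illustrative, not taken from this run:

  cat > /tmp/minimal_bdev.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_malloc_create",
            "params": { "name": "Malloc0", "num_blocks": 1024, "block_size": 512 }
          }
        ]
      }
    ]
  }
  EOF
  # dropping the outer {} reproduces the "not enclosed in {}" error;
  # making "subsystems" an object reproduces the "'subsystems' should be an array" error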
00:06:50.580 [2024-10-08 10:38:10.939660] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:50.580 [2024-10-08 10:38:10.939670] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.580 00:06:50.580 real 0m0.375s 00:06:50.580 user 0m0.153s 00:06:50.580 sys 0m0.117s 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.580 ************************************ 00:06:50.580 END TEST bdev_json_nonarray 00:06:50.580 ************************************ 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:50.580 10:38:11 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:06:50.580 10:38:11 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:06:50.580 10:38:11 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:06:50.580 10:38:11 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.580 10:38:11 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.580 10:38:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.580 ************************************ 00:06:50.580 START TEST bdev_gpt_uuid 00:06:50.580 ************************************ 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75383 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 75383 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 75383 ']' 00:06:50.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:50.580 10:38:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:50.840 [2024-10-08 10:38:11.181394] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:50.840 [2024-10-08 10:38:11.181529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75383 ] 00:06:50.840 [2024-10-08 10:38:11.313378] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:06:50.840 [2024-10-08 10:38:11.328556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.840 [2024-10-08 10:38:11.378890] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.782 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.782 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:06:51.782 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:51.782 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.782 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:51.782 Some configs were skipped because the RPC state that can call them passed over. 00:06:51.782 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.782 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:06:51.782 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.782 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:06:52.043 { 00:06:52.043 "name": "Nvme1n1p1", 00:06:52.043 "aliases": [ 00:06:52.043 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:06:52.043 ], 00:06:52.043 "product_name": "GPT Disk", 00:06:52.043 "block_size": 4096, 00:06:52.043 "num_blocks": 655104, 00:06:52.043 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:06:52.043 "assigned_rate_limits": { 00:06:52.043 "rw_ios_per_sec": 0, 00:06:52.043 "rw_mbytes_per_sec": 0, 00:06:52.043 "r_mbytes_per_sec": 0, 00:06:52.043 "w_mbytes_per_sec": 0 00:06:52.043 }, 00:06:52.043 "claimed": false, 00:06:52.043 "zoned": false, 00:06:52.043 "supported_io_types": { 00:06:52.043 "read": true, 00:06:52.043 "write": true, 00:06:52.043 "unmap": true, 00:06:52.043 "flush": true, 00:06:52.043 "reset": true, 00:06:52.043 "nvme_admin": false, 00:06:52.043 "nvme_io": false, 00:06:52.043 "nvme_io_md": false, 00:06:52.043 "write_zeroes": true, 00:06:52.043 "zcopy": false, 00:06:52.043 "get_zone_info": false, 00:06:52.043 "zone_management": false, 00:06:52.043 "zone_append": false, 00:06:52.043 "compare": true, 00:06:52.043 "compare_and_write": false, 00:06:52.043 "abort": true, 00:06:52.043 "seek_hole": false, 00:06:52.043 "seek_data": false, 00:06:52.043 "copy": true, 00:06:52.043 "nvme_iov_md": false 00:06:52.043 }, 00:06:52.043 "driver_specific": { 00:06:52.043 "gpt": { 00:06:52.043 "base_bdev": "Nvme1n1", 00:06:52.043 "offset_blocks": 256, 00:06:52.043 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:06:52.043 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 
00:06:52.043 "partition_name": "SPDK_TEST_first" 00:06:52.043 } 00:06:52.043 } 00:06:52.043 } 00:06:52.043 ]' 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.043 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:06:52.043 { 00:06:52.043 "name": "Nvme1n1p2", 00:06:52.043 "aliases": [ 00:06:52.043 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:06:52.043 ], 00:06:52.043 "product_name": "GPT Disk", 00:06:52.043 "block_size": 4096, 00:06:52.043 "num_blocks": 655103, 00:06:52.043 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:52.043 "assigned_rate_limits": { 00:06:52.043 "rw_ios_per_sec": 0, 00:06:52.043 "rw_mbytes_per_sec": 0, 00:06:52.043 "r_mbytes_per_sec": 0, 00:06:52.043 "w_mbytes_per_sec": 0 00:06:52.043 }, 00:06:52.043 "claimed": false, 00:06:52.043 "zoned": false, 00:06:52.043 "supported_io_types": { 00:06:52.043 "read": true, 00:06:52.043 "write": true, 00:06:52.043 "unmap": true, 00:06:52.043 "flush": true, 00:06:52.043 "reset": true, 00:06:52.043 "nvme_admin": false, 00:06:52.043 "nvme_io": false, 00:06:52.043 "nvme_io_md": false, 00:06:52.043 "write_zeroes": true, 00:06:52.043 "zcopy": false, 00:06:52.043 "get_zone_info": false, 00:06:52.043 "zone_management": false, 00:06:52.043 "zone_append": false, 00:06:52.043 "compare": true, 00:06:52.043 "compare_and_write": false, 00:06:52.043 "abort": true, 00:06:52.044 "seek_hole": false, 00:06:52.044 "seek_data": false, 00:06:52.044 "copy": true, 00:06:52.044 "nvme_iov_md": false 00:06:52.044 }, 00:06:52.044 "driver_specific": { 00:06:52.044 "gpt": { 00:06:52.044 "base_bdev": "Nvme1n1", 00:06:52.044 "offset_blocks": 655360, 00:06:52.044 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:06:52.044 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:06:52.044 "partition_name": "SPDK_TEST_second" 00:06:52.044 } 00:06:52.044 } 00:06:52.044 } 00:06:52.044 ]' 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 
abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 75383 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 75383 ']' 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 75383 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75383 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.044 killing process with pid 75383 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75383' 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 75383 00:06:52.044 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 75383 00:06:52.614 00:06:52.614 real 0m1.826s 00:06:52.614 user 0m1.972s 00:06:52.614 sys 0m0.389s 00:06:52.614 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.614 ************************************ 00:06:52.614 END TEST bdev_gpt_uuid 00:06:52.614 ************************************ 00:06:52.614 10:38:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:06:52.615 10:38:12 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:06:52.615 10:38:12 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:52.615 10:38:12 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:06:52.615 10:38:12 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:52.615 10:38:12 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:52.615 10:38:12 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:06:52.615 10:38:12 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:06:52.615 10:38:12 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:06:52.615 10:38:12 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:52.875 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:52.876 Waiting for block devices as requested 00:06:52.876 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.137 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.137 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:53.137 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:58.455 * Events for some block/disk devices (0000:00:13.0) were not caught, 
they may be missing 00:06:58.455 10:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:06:58.455 10:38:18 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:06:58.716 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:06:58.716 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:06:58.716 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:58.716 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:58.716 10:38:19 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:06:58.716 ************************************ 00:06:58.716 END TEST blockdev_nvme_gpt 00:06:58.716 ************************************ 00:06:58.716 00:06:58.716 real 0m48.657s 00:06:58.716 user 1m1.476s 00:06:58.716 sys 0m7.919s 00:06:58.716 10:38:19 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.716 10:38:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.716 10:38:19 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:06:58.716 10:38:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.716 10:38:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.716 10:38:19 -- common/autotest_common.sh@10 -- # set +x 00:06:58.716 ************************************ 00:06:58.716 START TEST nvme 00:06:58.716 ************************************ 00:06:58.716 10:38:19 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:06:58.716 * Looking for test storage... 00:06:58.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:06:58.716 10:38:19 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:58.716 10:38:19 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:58.716 10:38:19 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:58.716 10:38:19 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:58.716 10:38:19 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.716 10:38:19 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.716 10:38:19 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.716 10:38:19 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.716 10:38:19 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.716 10:38:19 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.716 10:38:19 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.716 10:38:19 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.716 10:38:19 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.716 10:38:19 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.716 10:38:19 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.716 10:38:19 nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:58.716 10:38:19 nvme -- scripts/common.sh@345 -- # : 1 00:06:58.716 10:38:19 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.716 10:38:19 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.716 10:38:19 nvme -- scripts/common.sh@365 -- # decimal 1 00:06:58.716 10:38:19 nvme -- scripts/common.sh@353 -- # local d=1 00:06:58.716 10:38:19 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.716 10:38:19 nvme -- scripts/common.sh@355 -- # echo 1 00:06:58.716 10:38:19 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.716 10:38:19 nvme -- scripts/common.sh@366 -- # decimal 2 00:06:58.716 10:38:19 nvme -- scripts/common.sh@353 -- # local d=2 00:06:58.716 10:38:19 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.716 10:38:19 nvme -- scripts/common.sh@355 -- # echo 2 00:06:58.716 10:38:19 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.716 10:38:19 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.716 10:38:19 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.716 10:38:19 nvme -- scripts/common.sh@368 -- # return 0 00:06:58.716 10:38:19 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.716 10:38:19 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:58.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.716 --rc genhtml_branch_coverage=1 00:06:58.716 --rc genhtml_function_coverage=1 00:06:58.716 --rc genhtml_legend=1 00:06:58.716 --rc geninfo_all_blocks=1 00:06:58.716 --rc geninfo_unexecuted_blocks=1 00:06:58.716 00:06:58.716 ' 00:06:58.716 10:38:19 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:58.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.716 --rc genhtml_branch_coverage=1 00:06:58.716 --rc genhtml_function_coverage=1 00:06:58.716 --rc genhtml_legend=1 00:06:58.716 --rc geninfo_all_blocks=1 00:06:58.716 --rc geninfo_unexecuted_blocks=1 00:06:58.716 00:06:58.716 ' 00:06:58.716 10:38:19 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:58.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.716 --rc genhtml_branch_coverage=1 00:06:58.716 --rc genhtml_function_coverage=1 00:06:58.716 --rc genhtml_legend=1 00:06:58.716 --rc geninfo_all_blocks=1 00:06:58.716 --rc geninfo_unexecuted_blocks=1 00:06:58.716 00:06:58.716 ' 00:06:58.716 10:38:19 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:58.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.716 --rc genhtml_branch_coverage=1 00:06:58.716 --rc genhtml_function_coverage=1 00:06:58.716 --rc genhtml_legend=1 00:06:58.716 --rc geninfo_all_blocks=1 00:06:58.716 --rc geninfo_unexecuted_blocks=1 00:06:58.716 00:06:58.716 ' 00:06:58.716 10:38:19 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:59.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:59.858 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:59.858 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:59.858 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:59.858 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:00.119 10:38:20 nvme -- nvme/nvme.sh@79 -- # uname 00:07:00.119 10:38:20 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:00.119 10:38:20 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:00.119 10:38:20 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:00.119 10:38:20 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:00.119 10:38:20 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:07:00.119 10:38:20 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:07:00.119 10:38:20 nvme -- common/autotest_common.sh@1071 -- # stubpid=76008 00:07:00.119 Waiting for stub to ready for secondary processes... 00:07:00.119 10:38:20 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:00.119 10:38:20 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:07:00.119 10:38:20 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:00.119 10:38:20 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/76008 ]] 00:07:00.119 10:38:20 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:07:00.119 [2024-10-08 10:38:20.484848] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:00.119 [2024-10-08 10:38:20.485017] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:01.061 [2024-10-08 10:38:21.323757] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:01.062 [2024-10-08 10:38:21.342539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.062 [2024-10-08 10:38:21.370382] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.062 [2024-10-08 10:38:21.370677] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.062 [2024-10-08 10:38:21.370741] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.062 [2024-10-08 10:38:21.384861] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:01.062 [2024-10-08 10:38:21.384919] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:01.062 [2024-10-08 10:38:21.398528] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:01.062 [2024-10-08 10:38:21.398728] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:01.062 [2024-10-08 10:38:21.400393] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:01.062 [2024-10-08 10:38:21.400822] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:01.062 [2024-10-08 10:38:21.400961] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:01.062 [2024-10-08 10:38:21.402605] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:01.062 [2024-10-08 10:38:21.403082] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:01.062 [2024-10-08 10:38:21.403273] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:01.062 [2024-10-08 10:38:21.405919] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:01.062 [2024-10-08 10:38:21.406339] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:01.062 [2024-10-08 10:38:21.406473] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 
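The stub startup traced above (its remaining cuse registrations for nvme3n2/nvme3n3 continue just below) is the harness's primary-process pattern: stub holds the hugepage allocation as the long-lived DPDK primary (--proc-type=primary, --file-prefix=spdk0, cores 1-3 per -m 0xE) so that short-lived test binaries can attach and detach quickly as secondaries. A minimal sketch of the wait loop autotest_common.sh runs here, with the binary and socket paths as traced in this log:

  /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
  stubpid=$!
  echo 'Waiting for stub to ready for secondary processes...'
  while [ ! -e /var/run/spdk_stub0 ]; do
      [ -e /proc/$stubpid ] || exit 1   # give up if the stub process died
      sleep 1s
  done
  echo done.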
00:07:01.062 [2024-10-08 10:38:21.406611] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:01.062 [2024-10-08 10:38:21.406745] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:01.062 done. 00:07:01.062 10:38:21 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:01.062 10:38:21 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:07:01.062 10:38:21 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:01.062 10:38:21 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:07:01.062 10:38:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.062 10:38:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.062 ************************************ 00:07:01.062 START TEST nvme_reset 00:07:01.062 ************************************ 00:07:01.062 10:38:21 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:01.324 Initializing NVMe Controllers 00:07:01.324 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:01.324 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:01.324 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:01.324 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:01.324 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:01.324 00:07:01.324 real 0m0.246s 00:07:01.324 user 0m0.062s 00:07:01.324 sys 0m0.122s 00:07:01.324 10:38:21 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.324 ************************************ 00:07:01.324 END TEST nvme_reset 00:07:01.324 ************************************ 00:07:01.324 10:38:21 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:01.324 10:38:21 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:01.324 10:38:21 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.324 10:38:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.324 10:38:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.324 ************************************ 00:07:01.324 START TEST nvme_identify 00:07:01.324 ************************************ 00:07:01.324 10:38:21 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:07:01.324 10:38:21 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:01.324 10:38:21 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:01.324 10:38:21 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:01.324 10:38:21 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:01.324 10:38:21 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:01.324 10:38:21 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:07:01.324 10:38:21 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:01.324 10:38:21 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:01.324 10:38:21 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:01.324 10:38:21 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:01.324 10:38:21 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 
0000:00:12.0 0000:00:13.0 00:07:01.324 10:38:21 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:01.589 [2024-10-08 10:38:21.995867] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 76029 terminated unexpected ===================================================== 00:07:01.589 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:01.589 ===================================================== 00:07:01.589 Controller Capabilities/Features 00:07:01.589 ================================ 00:07:01.589 Vendor ID: 1b36 00:07:01.589 Subsystem Vendor ID: 1af4 00:07:01.589 Serial Number: 12343 00:07:01.589 Model Number: QEMU NVMe Ctrl 00:07:01.589 Firmware Version: 8.0.0 00:07:01.589 Recommended Arb Burst: 6 00:07:01.589 IEEE OUI Identifier: 00 54 52 00:07:01.589 Multi-path I/O 00:07:01.589 May have multiple subsystem ports: No 00:07:01.589 May have multiple controllers: Yes 00:07:01.589 Associated with SR-IOV VF: No 00:07:01.589 Max Data Transfer Size: 524288 00:07:01.589 Max Number of Namespaces: 256 00:07:01.589 Max Number of I/O Queues: 64 00:07:01.589 NVMe Specification Version (VS): 1.4 00:07:01.589 NVMe Specification Version (Identify): 1.4 00:07:01.589 Maximum Queue Entries: 2048 00:07:01.589 Contiguous Queues Required: Yes 00:07:01.589 Arbitration Mechanisms Supported 00:07:01.589 Weighted Round Robin: Not Supported 00:07:01.589 Vendor Specific: Not Supported 00:07:01.589 Reset Timeout: 7500 ms 00:07:01.589 Doorbell Stride: 4 bytes 00:07:01.589 NVM Subsystem Reset: Not Supported 00:07:01.589 Command Sets Supported 00:07:01.589 NVM Command Set: Supported 00:07:01.589 Boot Partition: Not Supported 00:07:01.589 Memory Page Size Minimum: 4096 bytes 00:07:01.589 Memory Page Size Maximum: 65536 bytes 00:07:01.589 Persistent Memory Region: Not Supported 00:07:01.589 Optional Asynchronous Events Supported 00:07:01.589 Namespace Attribute Notices: Supported 00:07:01.589 Firmware Activation Notices: Not Supported 00:07:01.589 ANA Change Notices: Not Supported 00:07:01.589 PLE Aggregate Log Change Notices: Not Supported 00:07:01.589 LBA Status Info Alert Notices: Not Supported 00:07:01.589 EGE Aggregate Log Change Notices: Not Supported 00:07:01.589 Normal NVM Subsystem Shutdown event: Not Supported 00:07:01.589 Zone Descriptor Change Notices: Not Supported 00:07:01.589 Discovery Log Change Notices: Not Supported 00:07:01.589 Controller Attributes 00:07:01.589 128-bit Host Identifier: Not Supported 00:07:01.589 Non-Operational Permissive Mode: Not Supported 00:07:01.589 NVM Sets: Not Supported 00:07:01.589 Read Recovery Levels: Not Supported 00:07:01.589 Endurance Groups: Supported 00:07:01.589 Predictable Latency Mode: Not Supported 00:07:01.589 Traffic Based Keep Alive: Not Supported 00:07:01.589 Namespace Granularity: Not Supported 00:07:01.589 SQ Associations: Not Supported 00:07:01.589 UUID List: Not Supported 00:07:01.589 Multi-Domain Subsystem: Not Supported 00:07:01.589 Fixed Capacity Management: Not Supported 00:07:01.589 Variable Capacity Management: Not Supported 00:07:01.589 Delete Endurance Group: Not Supported 00:07:01.589 Delete NVM Set: Not Supported 00:07:01.589 Extended LBA Formats Supported: Supported 00:07:01.589 Flexible Data Placement Supported: Supported 00:07:01.589 00:07:01.589 Controller Memory Buffer Support 00:07:01.589 ================================ 00:07:01.589 Supported: No 00:07:01.589 00:07:01.589 Persistent Memory Region Support 00:07:01.589
================================ 00:07:01.589 Supported: No 00:07:01.589 00:07:01.589 Admin Command Set Attributes 00:07:01.589 ============================ 00:07:01.590 Security Send/Receive: Not Supported 00:07:01.590 Format NVM: Supported 00:07:01.590 Firmware Activate/Download: Not Supported 00:07:01.590 Namespace Management: Supported 00:07:01.590 Device Self-Test: Not Supported 00:07:01.590 Directives: Supported 00:07:01.590 NVMe-MI: Not Supported 00:07:01.590 Virtualization Management: Not Supported 00:07:01.590 Doorbell Buffer Config: Supported 00:07:01.590 Get LBA Status Capability: Not Supported 00:07:01.590 Command & Feature Lockdown Capability: Not Supported 00:07:01.590 Abort Command Limit: 4 00:07:01.590 Async Event Request Limit: 4 00:07:01.590 Number of Firmware Slots: N/A 00:07:01.590 Firmware Slot 1 Read-Only: N/A 00:07:01.590 Firmware Activation Without Reset: N/A 00:07:01.590 Multiple Update Detection Support: N/A 00:07:01.590 Firmware Update Granularity: No Information Provided 00:07:01.590 Per-Namespace SMART Log: Yes 00:07:01.590 Asymmetric Namespace Access Log Page: Not Supported 00:07:01.590 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:01.590 Command Effects Log Page: Supported 00:07:01.590 Get Log Page Extended Data: Supported 00:07:01.590 Telemetry Log Pages: Not Supported 00:07:01.590 Persistent Event Log Pages: Not Supported 00:07:01.590 Supported Log Pages Log Page: May Support 00:07:01.590 Commands Supported & Effects Log Page: Not Supported 00:07:01.590 Feature Identifiers & Effects Log Page:May Support 00:07:01.590 NVMe-MI Commands & Effects Log Page: May Support 00:07:01.590 Data Area 4 for Telemetry Log: Not Supported 00:07:01.590 Error Log Page Entries Supported: 1 00:07:01.590 Keep Alive: Not Supported 00:07:01.590 00:07:01.590 NVM Command Set Attributes 00:07:01.590 ========================== 00:07:01.590 Submission Queue Entry Size 00:07:01.590 Max: 64 00:07:01.590 Min: 64 00:07:01.590 Completion Queue Entry Size 00:07:01.590 Max: 16 00:07:01.590 Min: 16 00:07:01.590 Number of Namespaces: 256 00:07:01.590 Compare Command: Supported 00:07:01.590 Write Uncorrectable Command: Not Supported 00:07:01.590 Dataset Management Command: Supported 00:07:01.590 Write Zeroes Command: Supported 00:07:01.590 Set Features Save Field: Supported 00:07:01.590 Reservations: Not Supported 00:07:01.590 Timestamp: Supported 00:07:01.590 Copy: Supported 00:07:01.590 Volatile Write Cache: Present 00:07:01.590 Atomic Write Unit (Normal): 1 00:07:01.590 Atomic Write Unit (PFail): 1 00:07:01.590 Atomic Compare & Write Unit: 1 00:07:01.590 Fused Compare & Write: Not Supported 00:07:01.590 Scatter-Gather List 00:07:01.590 SGL Command Set: Supported 00:07:01.590 SGL Keyed: Not Supported 00:07:01.590 SGL Bit Bucket Descriptor: Not Supported 00:07:01.590 SGL Metadata Pointer: Not Supported 00:07:01.590 Oversized SGL: Not Supported 00:07:01.590 SGL Metadata Address: Not Supported 00:07:01.590 SGL Offset: Not Supported 00:07:01.590 Transport SGL Data Block: Not Supported 00:07:01.590 Replay Protected Memory Block: Not Supported 00:07:01.590 00:07:01.590 Firmware Slot Information 00:07:01.590 ========================= 00:07:01.590 Active slot: 1 00:07:01.590 Slot 1 Firmware Revision: 1.0 00:07:01.590 00:07:01.590 00:07:01.590 Commands Supported and Effects 00:07:01.590 ============================== 00:07:01.590 Admin Commands 00:07:01.590 -------------- 00:07:01.590 Delete I/O Submission Queue (00h): Supported 00:07:01.590 Create I/O Submission Queue (01h): Supported 
00:07:01.590 Get Log Page (02h): Supported 00:07:01.590 Delete I/O Completion Queue (04h): Supported 00:07:01.590 Create I/O Completion Queue (05h): Supported 00:07:01.590 Identify (06h): Supported 00:07:01.590 Abort (08h): Supported 00:07:01.590 Set Features (09h): Supported 00:07:01.590 Get Features (0Ah): Supported 00:07:01.590 Asynchronous Event Request (0Ch): Supported 00:07:01.590 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:01.590 Directive Send (19h): Supported 00:07:01.590 Directive Receive (1Ah): Supported 00:07:01.590 Virtualization Management (1Ch): Supported 00:07:01.590 Doorbell Buffer Config (7Ch): Supported 00:07:01.590 Format NVM (80h): Supported LBA-Change 00:07:01.590 I/O Commands 00:07:01.590 ------------ 00:07:01.590 Flush (00h): Supported LBA-Change 00:07:01.590 Write (01h): Supported LBA-Change 00:07:01.590 Read (02h): Supported 00:07:01.590 Compare (05h): Supported 00:07:01.590 Write Zeroes (08h): Supported LBA-Change 00:07:01.590 Dataset Management (09h): Supported LBA-Change 00:07:01.590 Unknown (0Ch): Supported 00:07:01.590 Unknown (12h): Supported 00:07:01.590 Copy (19h): Supported LBA-Change 00:07:01.590 Unknown (1Dh): Supported LBA-Change 00:07:01.590 00:07:01.590 Error Log 00:07:01.590 ========= 00:07:01.590 00:07:01.590 Arbitration 00:07:01.590 =========== 00:07:01.590 Arbitration Burst: no limit 00:07:01.590 00:07:01.590 Power Management 00:07:01.590 ================ 00:07:01.590 Number of Power States: 1 00:07:01.590 Current Power State: Power State #0 00:07:01.590 Power State #0: 00:07:01.590 Max Power: 25.00 W 00:07:01.590 Non-Operational State: Operational 00:07:01.590 Entry Latency: 16 microseconds 00:07:01.590 Exit Latency: 4 microseconds 00:07:01.590 Relative Read Throughput: 0 00:07:01.590 Relative Read Latency: 0 00:07:01.590 Relative Write Throughput: 0 00:07:01.590 Relative Write Latency: 0 00:07:01.590 Idle Power: Not Reported 00:07:01.590 Active Power: Not Reported 00:07:01.590 Non-Operational Permissive Mode: Not Supported 00:07:01.590 00:07:01.590 Health Information 00:07:01.590 ================== 00:07:01.590 Critical Warnings: 00:07:01.590 Available Spare Space: OK 00:07:01.590 Temperature: OK 00:07:01.590 Device Reliability: OK 00:07:01.590 Read Only: No 00:07:01.590 Volatile Memory Backup: OK 00:07:01.590 Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.590 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:01.590 Available Spare: 0% 00:07:01.590 Available Spare Threshold: 0% 00:07:01.590 Life Percentage Used: 0% 00:07:01.590 Data Units Read: 859 00:07:01.590 Data Units Written: 788 00:07:01.590 Host Read Commands: 41888 00:07:01.590 Host Write Commands: 41311 00:07:01.590 Controller Busy Time: 0 minutes 00:07:01.590 Power Cycles: 0 00:07:01.590 Power On Hours: 0 hours 00:07:01.590 Unsafe Shutdowns: 0 00:07:01.590 Unrecoverable Media Errors: 0 00:07:01.590 Lifetime Error Log Entries: 0 00:07:01.590 Warning Temperature Time: 0 minutes 00:07:01.590 Critical Temperature Time: 0 minutes 00:07:01.590 00:07:01.590 Number of Queues 00:07:01.590 ================ 00:07:01.590 Number of I/O Submission Queues: 64 00:07:01.590 Number of I/O Completion Queues: 64 00:07:01.590 00:07:01.590 ZNS Specific Controller Data 00:07:01.590 ============================ 00:07:01.590 Zone Append Size Limit: 0 00:07:01.590 00:07:01.590 00:07:01.590 Active Namespaces 00:07:01.590 ================= 00:07:01.590 Namespace ID:1 00:07:01.590 Error Recovery Timeout: Unlimited 00:07:01.590 Command Set Identifier: NVM (00h) 00:07:01.590 
Deallocate: Supported 00:07:01.590 Deallocated/Unwritten Error: Supported 00:07:01.590 Deallocated Read Value: All 0x00 00:07:01.590 Deallocate in Write Zeroes: Not Supported 00:07:01.590 Deallocated Guard Field: 0xFFFF 00:07:01.590 Flush: Supported 00:07:01.590 Reservation: Not Supported 00:07:01.590 Namespace Sharing Capabilities: Multiple Controllers 00:07:01.590 Size (in LBAs): 262144 (1GiB) 00:07:01.590 Capacity (in LBAs): 262144 (1GiB) 00:07:01.590 Utilization (in LBAs): 262144 (1GiB) 00:07:01.590 Thin Provisioning: Not Supported 00:07:01.590 Per-NS Atomic Units: No 00:07:01.590 Maximum Single Source Range Length: 128 00:07:01.590 Maximum Copy Length: 128 00:07:01.590 Maximum Source Range Count: 128 00:07:01.590 NGUID/EUI64 Never Reused: No 00:07:01.590 Namespace Write Protected: No 00:07:01.590 Endurance group ID: 1 00:07:01.590 Number of LBA Formats: 8 00:07:01.590 Current LBA Format: LBA Format #04 00:07:01.590 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.590 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.590 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.590 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.590 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.590 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.590 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.590 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.590 00:07:01.590 Get Feature FDP: 00:07:01.590 ================ 00:07:01.590 Enabled: Yes 00:07:01.590 FDP configuration index: 0 00:07:01.590 00:07:01.590 FDP configurations log page 00:07:01.590 =========================== 00:07:01.590 Number of FDP configurations: 1 00:07:01.590 Version: 0 00:07:01.590 Size: 112 00:07:01.590 FDP Configuration Descriptor: 0 00:07:01.590 Descriptor Size: 96 00:07:01.590 Reclaim Group Identifier format: 2 00:07:01.590 FDP Volatile Write Cache: Not Present 00:07:01.590 FDP Configuration: Valid 00:07:01.590 Vendor Specific Size: 0 00:07:01.590 Number of Reclaim Groups: 2 00:07:01.590 Number of Reclaim Unit Handles: 8 00:07:01.591 Max Placement Identifiers: 128 00:07:01.591 Number of Namespaces Supported: 256 00:07:01.591 Reclaim Unit Nominal Size: 6000000 bytes 00:07:01.591 Estimated Reclaim Unit Time Limit: Not Reported 00:07:01.591 RUH Desc #000: RUH Type: Initially Isolated 00:07:01.591 RUH Desc #001: RUH Type: Initially Isolated 00:07:01.591 RUH Desc #002: RUH Type: Initially Isolated 00:07:01.591 RUH Desc #003: RUH Type: Initially Isolated 00:07:01.591 RUH Desc #004: RUH Type: Initially Isolated 00:07:01.591 RUH Desc #005: RUH Type: Initially Isolated 00:07:01.591 RUH Desc #006: RUH Type: Initially Isolated 00:07:01.591 RUH Desc #007: RUH Type: Initially Isolated 00:07:01.591 00:07:01.591 FDP reclaim unit handle usage log page ====================================== [2024-10-08 10:38:21.999393] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 76029 terminated unexpected 00:07:01.591 Number of Reclaim Unit Handles: 8 00:07:01.591 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:01.591 RUH Usage Desc #001: RUH Attributes: Unused 00:07:01.591 RUH Usage Desc #002: RUH Attributes: Unused 00:07:01.591 RUH Usage Desc #003: RUH Attributes: Unused 00:07:01.591 RUH Usage Desc #004: RUH Attributes: Unused 00:07:01.591 RUH Usage Desc #005: RUH Attributes: Unused 00:07:01.591 RUH Usage Desc #006: RUH Attributes: Unused 00:07:01.591 RUH Usage Desc #007: RUH Attributes: Unused 00:07:01.591
00:07:01.591 FDP statistics log page 00:07:01.591 ======================= 00:07:01.591 Host bytes with metadata written: 504668160 00:07:01.591 Media bytes with metadata written: 504725504 00:07:01.591 Media bytes erased: 0 00:07:01.591 00:07:01.591 FDP events log page 00:07:01.591 =================== 00:07:01.591 Number of FDP events: 0 00:07:01.591 00:07:01.591 NVM Specific Namespace Data 00:07:01.591 =========================== 00:07:01.591 Logical Block Storage Tag Mask: 0 00:07:01.591 Protection Information Capabilities: 00:07:01.591 16b Guard Protection Information Storage Tag Support: No 00:07:01.591 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.591 Storage Tag Check Read Support: No 00:07:01.591 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.591 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.591 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.591 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.591 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.591 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.591 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.591 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.591 ===================================================== 00:07:01.591 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:01.591 ===================================================== 00:07:01.591 Controller Capabilities/Features 00:07:01.591 ================================ 00:07:01.591 Vendor ID: 1b36 00:07:01.591 Subsystem Vendor ID: 1af4 00:07:01.591 Serial Number: 12340 00:07:01.591 Model Number: QEMU NVMe Ctrl 00:07:01.591 Firmware Version: 8.0.0 00:07:01.591 Recommended Arb Burst: 6 00:07:01.591 IEEE OUI Identifier: 00 54 52 00:07:01.591 Multi-path I/O 00:07:01.591 May have multiple subsystem ports: No 00:07:01.591 May have multiple controllers: No 00:07:01.591 Associated with SR-IOV VF: No 00:07:01.591 Max Data Transfer Size: 524288 00:07:01.591 Max Number of Namespaces: 256 00:07:01.591 Max Number of I/O Queues: 64 00:07:01.591 NVMe Specification Version (VS): 1.4 00:07:01.591 NVMe Specification Version (Identify): 1.4 00:07:01.591 Maximum Queue Entries: 2048 00:07:01.591 Contiguous Queues Required: Yes 00:07:01.591 Arbitration Mechanisms Supported 00:07:01.591 Weighted Round Robin: Not Supported 00:07:01.591 Vendor Specific: Not Supported 00:07:01.591 Reset Timeout: 7500 ms 00:07:01.591 Doorbell Stride: 4 bytes 00:07:01.591 NVM Subsystem Reset: Not Supported 00:07:01.591 Command Sets Supported 00:07:01.591 NVM Command Set: Supported 00:07:01.591 Boot Partition: Not Supported 00:07:01.591 Memory Page Size Minimum: 4096 bytes 00:07:01.591 Memory Page Size Maximum: 65536 bytes 00:07:01.591 Persistent Memory Region: Not Supported 00:07:01.591 Optional Asynchronous Events Supported 00:07:01.591 Namespace Attribute Notices: Supported 00:07:01.591 Firmware Activation Notices: Not Supported 00:07:01.591 ANA Change Notices: Not Supported 00:07:01.591 PLE Aggregate Log Change Notices: Not Supported 00:07:01.591 LBA Status Info Alert Notices: Not Supported 00:07:01.591 EGE Aggregate Log Change Notices: Not Supported 00:07:01.591 Normal NVM 
Subsystem Shutdown event: Not Supported 00:07:01.591 Zone Descriptor Change Notices: Not Supported 00:07:01.591 Discovery Log Change Notices: Not Supported 00:07:01.591 Controller Attributes 00:07:01.591 128-bit Host Identifier: Not Supported 00:07:01.591 Non-Operational Permissive Mode: Not Supported 00:07:01.591 NVM Sets: Not Supported 00:07:01.591 Read Recovery Levels: Not Supported 00:07:01.591 Endurance Groups: Not Supported 00:07:01.591 Predictable Latency Mode: Not Supported 00:07:01.591 Traffic Based Keep ALive: Not Supported 00:07:01.591 Namespace Granularity: Not Supported 00:07:01.591 SQ Associations: Not Supported 00:07:01.591 UUID List: Not Supported 00:07:01.591 Multi-Domain Subsystem: Not Supported 00:07:01.591 Fixed Capacity Management: Not Supported 00:07:01.591 Variable Capacity Management: Not Supported 00:07:01.591 Delete Endurance Group: Not Supported 00:07:01.591 Delete NVM Set: Not Supported 00:07:01.591 Extended LBA Formats Supported: Supported 00:07:01.591 Flexible Data Placement Supported: Not Supported 00:07:01.591 00:07:01.591 Controller Memory Buffer Support 00:07:01.591 ================================ 00:07:01.591 Supported: No 00:07:01.591 00:07:01.591 Persistent Memory Region Support 00:07:01.591 ================================ 00:07:01.591 Supported: No 00:07:01.591 00:07:01.591 Admin Command Set Attributes 00:07:01.591 ============================ 00:07:01.591 Security Send/Receive: Not Supported 00:07:01.591 Format NVM: Supported 00:07:01.591 Firmware Activate/Download: Not Supported 00:07:01.591 Namespace Management: Supported 00:07:01.591 Device Self-Test: Not Supported 00:07:01.591 Directives: Supported 00:07:01.591 NVMe-MI: Not Supported 00:07:01.591 Virtualization Management: Not Supported 00:07:01.591 Doorbell Buffer Config: Supported 00:07:01.591 Get LBA Status Capability: Not Supported 00:07:01.591 Command & Feature Lockdown Capability: Not Supported 00:07:01.591 Abort Command Limit: 4 00:07:01.591 Async Event Request Limit: 4 00:07:01.591 Number of Firmware Slots: N/A 00:07:01.591 Firmware Slot 1 Read-Only: N/A 00:07:01.591 Firmware Activation Without Reset: N/A 00:07:01.591 Multiple Update Detection Support: N/A 00:07:01.591 Firmware Update Granularity: No Information Provided 00:07:01.591 Per-Namespace SMART Log: Yes 00:07:01.591 Asymmetric Namespace Access Log Page: Not Supported 00:07:01.591 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:01.591 Command Effects Log Page: Supported 00:07:01.591 Get Log Page Extended Data: Supported 00:07:01.591 Telemetry Log Pages: Not Supported 00:07:01.591 Persistent Event Log Pages: Not Supported 00:07:01.591 Supported Log Pages Log Page: May Support 00:07:01.591 Commands Supported & Effects Log Page: Not Supported 00:07:01.591 Feature Identifiers & Effects Log Page:May Support 00:07:01.591 NVMe-MI Commands & Effects Log Page: May Support 00:07:01.591 Data Area 4 for Telemetry Log: Not Supported 00:07:01.591 Error Log Page Entries Supported: 1 00:07:01.591 Keep Alive: Not Supported 00:07:01.591 00:07:01.591 NVM Command Set Attributes 00:07:01.591 ========================== 00:07:01.591 Submission Queue Entry Size 00:07:01.591 Max: 64 00:07:01.591 Min: 64 00:07:01.591 Completion Queue Entry Size 00:07:01.591 Max: 16 00:07:01.591 Min: 16 00:07:01.591 Number of Namespaces: 256 00:07:01.591 Compare Command: Supported 00:07:01.591 Write Uncorrectable Command: Not Supported 00:07:01.591 Dataset Management Command: Supported 00:07:01.591 Write Zeroes Command: Supported 00:07:01.591 Set Features Save Field: 
Supported 00:07:01.591 Reservations: Not Supported 00:07:01.591 Timestamp: Supported 00:07:01.591 Copy: Supported 00:07:01.591 Volatile Write Cache: Present 00:07:01.591 Atomic Write Unit (Normal): 1 00:07:01.591 Atomic Write Unit (PFail): 1 00:07:01.591 Atomic Compare & Write Unit: 1 00:07:01.591 Fused Compare & Write: Not Supported 00:07:01.591 Scatter-Gather List 00:07:01.591 SGL Command Set: Supported 00:07:01.591 SGL Keyed: Not Supported 00:07:01.591 SGL Bit Bucket Descriptor: Not Supported 00:07:01.591 SGL Metadata Pointer: Not Supported 00:07:01.591 Oversized SGL: Not Supported 00:07:01.591 SGL Metadata Address: Not Supported 00:07:01.591 SGL Offset: Not Supported 00:07:01.591 Transport SGL Data Block: Not Supported 00:07:01.591 Replay Protected Memory Block: Not Supported 00:07:01.591 00:07:01.591 Firmware Slot Information 00:07:01.591 ========================= 00:07:01.591 Active slot: 1 00:07:01.591 Slot 1 Firmware Revision: 1.0 00:07:01.591 00:07:01.591 00:07:01.591 Commands Supported and Effects 00:07:01.591 ============================== 00:07:01.591 Admin Commands 00:07:01.591 -------------- 00:07:01.591 Delete I/O Submission Queue (00h): Supported 00:07:01.591 Create I/O Submission Queue (01h): Supported 00:07:01.591 Get Log Page (02h): Supported 00:07:01.591 Delete I/O Completion Queue (04h): Supported 00:07:01.591 Create I/O Completion Queue (05h): Supported 00:07:01.592 Identify (06h): Supported 00:07:01.592 Abort (08h): Supported 00:07:01.592 Set Features (09h): Supported 00:07:01.592 Get Features (0Ah): Supported 00:07:01.592 Asynchronous Event Request (0Ch): Supported 00:07:01.592 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:01.592 Directive Send (19h): Supported 00:07:01.592 Directive Receive (1Ah): Supported 00:07:01.592 Virtualization Management (1Ch): Supported 00:07:01.592 Doorbell Buffer Config (7Ch): Supported 00:07:01.592 Format NVM (80h): Supported LBA-Change 00:07:01.592 I/O Commands 00:07:01.592 ------------ 00:07:01.592 Flush (00h): Supported LBA-Change 00:07:01.592 Write (01h): Supported LBA-Change 00:07:01.592 Read (02h): Supported 00:07:01.592 Compare (05h): Supported 00:07:01.592 Write Zeroes (08h): Supported LBA-Change 00:07:01.592 Dataset Management (09h): Supported LBA-Change 00:07:01.592 Unknown (0Ch): Supported 00:07:01.592 Unknown (12h): Supported 00:07:01.592 Copy (19h): Supported LBA-Change 00:07:01.592 Unknown (1Dh): Supported LBA-Change 00:07:01.592 00:07:01.592 Error Log 00:07:01.592 ========= 00:07:01.592 00:07:01.592 Arbitration 00:07:01.592 =========== 00:07:01.592 Arbitration Burst: no limit 00:07:01.592 00:07:01.592 Power Management 00:07:01.592 ================ 00:07:01.592 Number of Power States: 1 00:07:01.592 Current Power State: Power State #0 00:07:01.592 Power State #0: 00:07:01.592 Max Power: 25.00 W 00:07:01.592 Non-Operational State: Operational 00:07:01.592 Entry Latency: 16 microseconds 00:07:01.592 Exit Latency: 4 microseconds 00:07:01.592 Relative Read Throughput: 0 00:07:01.592 Relative Read Latency: 0 00:07:01.592 Relative Write Throughput: 0 00:07:01.592 Relative Write Latency: 0 00:07:01.592 Idle Power: Not Reported 00:07:01.592 Active Power: Not Reported 00:07:01.592 Non-Operational Permissive Mode: Not Supported 00:07:01.592 00:07:01.592 Health Information 00:07:01.592 ================== 00:07:01.592 Critical Warnings: 00:07:01.592 Available Spare Space: OK 00:07:01.592 Temperature: OK 00:07:01.592 Device Reliability: OK 00:07:01.592 Read Only: No 00:07:01.592 Volatile Memory Backup: OK 00:07:01.592 
Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.592 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:01.592 Available Spare: 0% 00:07:01.592 Available Spare Threshold: 0% 00:07:01.592 Life Percentage Used: 0% 00:07:01.592 Data Units Read: 708 00:07:01.592 Data Units Written: 636 00:07:01.592 Host Read Commands: 40288 00:07:01.592 Host Write Commands: 40074 00:07:01.592 Controller Busy Time: 0 minutes 00:07:01.592 Power Cycles: 0 00:07:01.592 Power On Hours: 0 hours 00:07:01.592 Unsafe Shutdowns: 0 00:07:01.592 Unrecoverable Media Errors: 0 00:07:01.592 Lifetime Error Log Entries: 0 00:07:01.592 Warning Temperature Time: 0 minutes 00:07:01.592 Critical Temperature Time: 0 minutes 00:07:01.592 00:07:01.592 Number of Queues 00:07:01.592 ================ 00:07:01.592 Number of I/O Submission Queues: 64 00:07:01.592 Number of I/O Completion Queues: 64 00:07:01.592 00:07:01.592 ZNS Specific Controller Data 00:07:01.592 ============================ 00:07:01.592 Zone Append Size Limit: 0 00:07:01.592 00:07:01.592 00:07:01.592 Active Namespaces 00:07:01.592 ================= 00:07:01.592 Namespace ID:1 00:07:01.592 Error Recovery Timeout: Unlimited 00:07:01.592 Command Set Identifier: NVM (00h) 00:07:01.592 Deallocate: Supported 00:07:01.592 Deallocated/Unwritten Error: Supported 00:07:01.592 Deallocated Read Value: All 0x00 00:07:01.592 Deallocate in Write Zeroes: Not Supported 00:07:01.592 Deallocated Guard Field: 0xFFFF 00:07:01.592 Flush: Supported 00:07:01.592 Reservation: Not Supported 00:07:01.592 Metadata Transferred as: Separate Metadata Buffer 00:07:01.592 Namespace Sharing Capabilities: Private 00:07:01.592 Size (in LBAs): 1548666 (5GiB) 00:07:01.592 Capacity (in LBAs): 1548666 (5GiB) 00:07:01.592 Utilization (in LBAs): 1548666 (5GiB) 00:07:01.592 Thin Provisioning: Not Supported 00:07:01.592 Per-NS Atomic Units: No 00:07:01.592 Maximum Single Source Range Length: 128 00:07:01.592 Maximum Copy Length: 128 00:07:01.592 Maximum Source Range Count: 128 00:07:01.592 NGUID/EUI64 Never Reused: No 00:07:01.592 Namespace Write Protected: No 00:07:01.592 Number of LBA Formats: 8 00:07:01.592 Current LBA Format: LBA Format #07 00:07:01.592 [2024-10-08 10:38:22.001258] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 76029 terminated unexpected 00:07:01.592 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.592 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.592 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.592 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.592 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.592 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.592 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.592 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.592 00:07:01.592 NVM Specific Namespace Data 00:07:01.592 =========================== 00:07:01.592 Logical Block Storage Tag Mask: 0 00:07:01.592 Protection Information Capabilities: 00:07:01.592 16b Guard Protection Information Storage Tag Support: No 00:07:01.592 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.592 Storage Tag Check Read Support: No 00:07:01.592 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.592 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.592 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.592 Extended LBA 
Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.592 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.592 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.592 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.592 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.592 ===================================================== 00:07:01.592 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:01.592 ===================================================== 00:07:01.592 Controller Capabilities/Features 00:07:01.592 ================================ 00:07:01.592 Vendor ID: 1b36 00:07:01.592 Subsystem Vendor ID: 1af4 00:07:01.592 Serial Number: 12341 00:07:01.592 Model Number: QEMU NVMe Ctrl 00:07:01.592 Firmware Version: 8.0.0 00:07:01.592 Recommended Arb Burst: 6 00:07:01.592 IEEE OUI Identifier: 00 54 52 00:07:01.592 Multi-path I/O 00:07:01.592 May have multiple subsystem ports: No 00:07:01.592 May have multiple controllers: No 00:07:01.592 Associated with SR-IOV VF: No 00:07:01.592 Max Data Transfer Size: 524288 00:07:01.592 Max Number of Namespaces: 256 00:07:01.592 Max Number of I/O Queues: 64 00:07:01.592 NVMe Specification Version (VS): 1.4 00:07:01.592 NVMe Specification Version (Identify): 1.4 00:07:01.592 Maximum Queue Entries: 2048 00:07:01.592 Contiguous Queues Required: Yes 00:07:01.592 Arbitration Mechanisms Supported 00:07:01.592 Weighted Round Robin: Not Supported 00:07:01.592 Vendor Specific: Not Supported 00:07:01.592 Reset Timeout: 7500 ms 00:07:01.592 Doorbell Stride: 4 bytes 00:07:01.592 NVM Subsystem Reset: Not Supported 00:07:01.592 Command Sets Supported 00:07:01.592 NVM Command Set: Supported 00:07:01.592 Boot Partition: Not Supported 00:07:01.592 Memory Page Size Minimum: 4096 bytes 00:07:01.592 Memory Page Size Maximum: 65536 bytes 00:07:01.592 Persistent Memory Region: Not Supported 00:07:01.592 Optional Asynchronous Events Supported 00:07:01.592 Namespace Attribute Notices: Supported 00:07:01.592 Firmware Activation Notices: Not Supported 00:07:01.592 ANA Change Notices: Not Supported 00:07:01.592 PLE Aggregate Log Change Notices: Not Supported 00:07:01.592 LBA Status Info Alert Notices: Not Supported 00:07:01.592 EGE Aggregate Log Change Notices: Not Supported 00:07:01.592 Normal NVM Subsystem Shutdown event: Not Supported 00:07:01.592 Zone Descriptor Change Notices: Not Supported 00:07:01.592 Discovery Log Change Notices: Not Supported 00:07:01.592 Controller Attributes 00:07:01.592 128-bit Host Identifier: Not Supported 00:07:01.592 Non-Operational Permissive Mode: Not Supported 00:07:01.592 NVM Sets: Not Supported 00:07:01.592 Read Recovery Levels: Not Supported 00:07:01.592 Endurance Groups: Not Supported 00:07:01.592 Predictable Latency Mode: Not Supported 00:07:01.592 Traffic Based Keep ALive: Not Supported 00:07:01.592 Namespace Granularity: Not Supported 00:07:01.592 SQ Associations: Not Supported 00:07:01.592 UUID List: Not Supported 00:07:01.592 Multi-Domain Subsystem: Not Supported 00:07:01.592 Fixed Capacity Management: Not Supported 00:07:01.592 Variable Capacity Management: Not Supported 00:07:01.592 Delete Endurance Group: Not Supported 00:07:01.592 Delete NVM Set: Not Supported 00:07:01.592 Extended LBA Formats Supported: Supported 00:07:01.592 Flexible Data Placement Supported: Not Supported 00:07:01.592 00:07:01.592 
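Each Health Information section reports temperatures in Kelvin with the Celsius value in parentheses; the figures are consistent with the integer offset K - 273 (rather than the exact 273.15). A quick shell confirmation of the two values that recur throughout this run:

    # 323 Kelvin -> 50 Celsius (current), 343 Kelvin -> 70 Celsius (threshold)
    for k in 323 343; do echo "$k Kelvin = $(( k - 273 )) Celsius"; done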
Controller Memory Buffer Support 00:07:01.592 ================================ 00:07:01.592 Supported: No 00:07:01.592 00:07:01.593 Persistent Memory Region Support 00:07:01.593 ================================ 00:07:01.593 Supported: No 00:07:01.593 00:07:01.593 Admin Command Set Attributes 00:07:01.593 ============================ 00:07:01.593 Security Send/Receive: Not Supported 00:07:01.593 Format NVM: Supported 00:07:01.593 Firmware Activate/Download: Not Supported 00:07:01.593 Namespace Management: Supported 00:07:01.593 Device Self-Test: Not Supported 00:07:01.593 Directives: Supported 00:07:01.593 NVMe-MI: Not Supported 00:07:01.593 Virtualization Management: Not Supported 00:07:01.593 Doorbell Buffer Config: Supported 00:07:01.593 Get LBA Status Capability: Not Supported 00:07:01.593 Command & Feature Lockdown Capability: Not Supported 00:07:01.593 Abort Command Limit: 4 00:07:01.593 Async Event Request Limit: 4 00:07:01.593 Number of Firmware Slots: N/A 00:07:01.593 Firmware Slot 1 Read-Only: N/A 00:07:01.593 Firmware Activation Without Reset: N/A 00:07:01.593 Multiple Update Detection Support: N/A 00:07:01.593 Firmware Update Granularity: No Information Provided 00:07:01.593 Per-Namespace SMART Log: Yes 00:07:01.593 Asymmetric Namespace Access Log Page: Not Supported 00:07:01.593 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:01.593 Command Effects Log Page: Supported 00:07:01.593 Get Log Page Extended Data: Supported 00:07:01.593 Telemetry Log Pages: Not Supported 00:07:01.593 Persistent Event Log Pages: Not Supported 00:07:01.593 Supported Log Pages Log Page: May Support 00:07:01.593 Commands Supported & Effects Log Page: Not Supported 00:07:01.593 Feature Identifiers & Effects Log Page:May Support 00:07:01.593 NVMe-MI Commands & Effects Log Page: May Support 00:07:01.593 Data Area 4 for Telemetry Log: Not Supported 00:07:01.593 Error Log Page Entries Supported: 1 00:07:01.593 Keep Alive: Not Supported 00:07:01.593 00:07:01.593 NVM Command Set Attributes 00:07:01.593 ========================== 00:07:01.593 Submission Queue Entry Size 00:07:01.593 Max: 64 00:07:01.593 Min: 64 00:07:01.593 Completion Queue Entry Size 00:07:01.593 Max: 16 00:07:01.593 Min: 16 00:07:01.593 Number of Namespaces: 256 00:07:01.593 Compare Command: Supported 00:07:01.593 Write Uncorrectable Command: Not Supported 00:07:01.593 Dataset Management Command: Supported 00:07:01.593 Write Zeroes Command: Supported 00:07:01.593 Set Features Save Field: Supported 00:07:01.593 Reservations: Not Supported 00:07:01.593 Timestamp: Supported 00:07:01.593 Copy: Supported 00:07:01.593 Volatile Write Cache: Present 00:07:01.593 Atomic Write Unit (Normal): 1 00:07:01.593 Atomic Write Unit (PFail): 1 00:07:01.593 Atomic Compare & Write Unit: 1 00:07:01.593 Fused Compare & Write: Not Supported 00:07:01.593 Scatter-Gather List 00:07:01.593 SGL Command Set: Supported 00:07:01.593 SGL Keyed: Not Supported 00:07:01.593 SGL Bit Bucket Descriptor: Not Supported 00:07:01.593 SGL Metadata Pointer: Not Supported 00:07:01.593 Oversized SGL: Not Supported 00:07:01.593 SGL Metadata Address: Not Supported 00:07:01.593 SGL Offset: Not Supported 00:07:01.593 Transport SGL Data Block: Not Supported 00:07:01.593 Replay Protected Memory Block: Not Supported 00:07:01.593 00:07:01.593 Firmware Slot Information 00:07:01.593 ========================= 00:07:01.593 Active slot: 1 00:07:01.593 Slot 1 Firmware Revision: 1.0 00:07:01.593 00:07:01.593 00:07:01.593 Commands Supported and Effects 00:07:01.593 ============================== 
00:07:01.593 Admin Commands 00:07:01.593 -------------- 00:07:01.593 Delete I/O Submission Queue (00h): Supported 00:07:01.593 Create I/O Submission Queue (01h): Supported 00:07:01.593 Get Log Page (02h): Supported 00:07:01.593 Delete I/O Completion Queue (04h): Supported 00:07:01.593 Create I/O Completion Queue (05h): Supported 00:07:01.593 Identify (06h): Supported 00:07:01.593 Abort (08h): Supported 00:07:01.593 Set Features (09h): Supported 00:07:01.593 Get Features (0Ah): Supported 00:07:01.593 Asynchronous Event Request (0Ch): Supported 00:07:01.593 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:01.593 Directive Send (19h): Supported 00:07:01.593 Directive Receive (1Ah): Supported 00:07:01.593 Virtualization Management (1Ch): Supported 00:07:01.593 Doorbell Buffer Config (7Ch): Supported 00:07:01.593 Format NVM (80h): Supported LBA-Change 00:07:01.593 I/O Commands 00:07:01.593 ------------ 00:07:01.593 Flush (00h): Supported LBA-Change 00:07:01.593 Write (01h): Supported LBA-Change 00:07:01.593 Read (02h): Supported 00:07:01.593 Compare (05h): Supported 00:07:01.593 Write Zeroes (08h): Supported LBA-Change 00:07:01.593 Dataset Management (09h): Supported LBA-Change 00:07:01.593 Unknown (0Ch): Supported 00:07:01.593 Unknown (12h): Supported 00:07:01.593 Copy (19h): Supported LBA-Change 00:07:01.593 Unknown (1Dh): Supported LBA-Change 00:07:01.593 00:07:01.593 Error Log 00:07:01.593 ========= 00:07:01.593 00:07:01.593 Arbitration 00:07:01.593 =========== 00:07:01.593 Arbitration Burst: no limit 00:07:01.593 00:07:01.593 Power Management 00:07:01.593 ================ 00:07:01.593 Number of Power States: 1 00:07:01.593 Current Power State: Power State #0 00:07:01.593 Power State #0: 00:07:01.593 Max Power: 25.00 W 00:07:01.593 Non-Operational State: Operational 00:07:01.593 Entry Latency: 16 microseconds 00:07:01.593 Exit Latency: 4 microseconds 00:07:01.593 Relative Read Throughput: 0 00:07:01.593 Relative Read Latency: 0 00:07:01.593 Relative Write Throughput: 0 00:07:01.593 Relative Write Latency: 0 00:07:01.593 Idle Power: Not Reported 00:07:01.593 Active Power: Not Reported 00:07:01.593 Non-Operational Permissive Mode: Not Supported 00:07:01.593 00:07:01.593 Health Information 00:07:01.593 ================== 00:07:01.593 Critical Warnings: 00:07:01.593 Available Spare Space: OK 00:07:01.593 Temperature: OK 00:07:01.593 Device Reliability: OK 00:07:01.593 Read Only: No 00:07:01.593 Volatile Memory Backup: OK 00:07:01.593 Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.593 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:01.593 Available Spare: 0% 00:07:01.593 Available Spare Threshold: 0% 00:07:01.593 Life Percentage Used: 0% 00:07:01.593 Data Units Read: 1131 00:07:01.593 Data Units Written: 997 00:07:01.593 Host Read Commands: 60979 00:07:01.593 Host Write Commands: 59756 00:07:01.593 Controller Busy Time: 0 minutes 00:07:01.593 Power Cycles: 0 00:07:01.593 Power On Hours: 0 hours 00:07:01.593 Unsafe Shutdowns: 0 00:07:01.593 Unrecoverable Media Errors: 0 00:07:01.593 Lifetime Error Log Entries: 0 00:07:01.593 Warning Temperature Time: 0 minutes 00:07:01.593 Critical Temperature Time: 0 minutes 00:07:01.593 00:07:01.593 Number of Queues 00:07:01.593 ================ 00:07:01.593 Number of I/O Submission Queues: 64 00:07:01.593 Number of I/O Completion Queues: 64 00:07:01.593 00:07:01.593 ZNS Specific Controller Data 00:07:01.593 ============================ 00:07:01.593 Zone Append Size Limit: 0 00:07:01.593 00:07:01.593 00:07:01.593 Active Namespaces 
00:07:01.593 ================= 00:07:01.593 Namespace ID:1 00:07:01.593 Error Recovery Timeout: Unlimited 00:07:01.593 Command Set Identifier: NVM (00h) 00:07:01.593 Deallocate: Supported 00:07:01.593 Deallocated/Unwritten Error: Supported 00:07:01.593 Deallocated Read Value: All 0x00 00:07:01.593 Deallocate in Write Zeroes: Not Supported 00:07:01.593 Deallocated Guard Field: 0xFFFF 00:07:01.593 Flush: Supported 00:07:01.593 Reservation: Not Supported 00:07:01.593 Namespace Sharing Capabilities: Private 00:07:01.593 Size (in LBAs): 1310720 (5GiB) 00:07:01.593 Capacity (in LBAs): 1310720 (5GiB) 00:07:01.593 Utilization (in LBAs): 1310720 (5GiB) 00:07:01.593 Thin Provisioning: Not Supported 00:07:01.593 Per-NS Atomic Units: No 00:07:01.593 Maximum Single Source Range Length: 128 00:07:01.593 Maximum Copy Length: 128 00:07:01.593 Maximum Source Range Count: 128 00:07:01.593 NGUID/EUI64 Never Reused: No 00:07:01.593 Namespace Write Protected: No 00:07:01.593 Number of LBA Formats: 8 00:07:01.593 Current LBA Format: LBA Format #04 00:07:01.593 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.593 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.593 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.593 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.593 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.593 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.593 [2024-10-08 10:38:22.003282] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 76029 terminated unexpected 00:07:01.593 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.593 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.593 00:07:01.593 NVM Specific Namespace Data 00:07:01.593 =========================== 00:07:01.593 Logical Block Storage Tag Mask: 0 00:07:01.593 Protection Information Capabilities: 00:07:01.593 16b Guard Protection Information Storage Tag Support: No 00:07:01.593 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.593 Storage Tag Check Read Support: No 00:07:01.593 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.593 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.594 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.594 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.594 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.594 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.594 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.594 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.594 ===================================================== 00:07:01.594 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:01.594 ===================================================== 00:07:01.594 Controller Capabilities/Features 00:07:01.594 ================================ 00:07:01.594 Vendor ID: 1b36 00:07:01.594 Subsystem Vendor ID: 1af4 00:07:01.594 Serial Number: 12342 00:07:01.594 Model Number: QEMU NVMe Ctrl 00:07:01.594 Firmware Version: 8.0.0 00:07:01.594 Recommended Arb Burst: 6 00:07:01.594 IEEE OUI Identifier: 00 54 52 00:07:01.594 Multi-path I/O 00:07:01.594 May have multiple subsystem ports: No 
00:07:01.594 May have multiple controllers: No 00:07:01.594 Associated with SR-IOV VF: No 00:07:01.594 Max Data Transfer Size: 524288 00:07:01.594 Max Number of Namespaces: 256 00:07:01.594 Max Number of I/O Queues: 64 00:07:01.594 NVMe Specification Version (VS): 1.4 00:07:01.594 NVMe Specification Version (Identify): 1.4 00:07:01.594 Maximum Queue Entries: 2048 00:07:01.594 Contiguous Queues Required: Yes 00:07:01.594 Arbitration Mechanisms Supported 00:07:01.594 Weighted Round Robin: Not Supported 00:07:01.594 Vendor Specific: Not Supported 00:07:01.594 Reset Timeout: 7500 ms 00:07:01.594 Doorbell Stride: 4 bytes 00:07:01.594 NVM Subsystem Reset: Not Supported 00:07:01.594 Command Sets Supported 00:07:01.594 NVM Command Set: Supported 00:07:01.594 Boot Partition: Not Supported 00:07:01.594 Memory Page Size Minimum: 4096 bytes 00:07:01.594 Memory Page Size Maximum: 65536 bytes 00:07:01.594 Persistent Memory Region: Not Supported 00:07:01.594 Optional Asynchronous Events Supported 00:07:01.594 Namespace Attribute Notices: Supported 00:07:01.594 Firmware Activation Notices: Not Supported 00:07:01.594 ANA Change Notices: Not Supported 00:07:01.594 PLE Aggregate Log Change Notices: Not Supported 00:07:01.594 LBA Status Info Alert Notices: Not Supported 00:07:01.594 EGE Aggregate Log Change Notices: Not Supported 00:07:01.594 Normal NVM Subsystem Shutdown event: Not Supported 00:07:01.594 Zone Descriptor Change Notices: Not Supported 00:07:01.594 Discovery Log Change Notices: Not Supported 00:07:01.594 Controller Attributes 00:07:01.594 128-bit Host Identifier: Not Supported 00:07:01.594 Non-Operational Permissive Mode: Not Supported 00:07:01.594 NVM Sets: Not Supported 00:07:01.594 Read Recovery Levels: Not Supported 00:07:01.594 Endurance Groups: Not Supported 00:07:01.594 Predictable Latency Mode: Not Supported 00:07:01.594 Traffic Based Keep ALive: Not Supported 00:07:01.594 Namespace Granularity: Not Supported 00:07:01.594 SQ Associations: Not Supported 00:07:01.594 UUID List: Not Supported 00:07:01.594 Multi-Domain Subsystem: Not Supported 00:07:01.594 Fixed Capacity Management: Not Supported 00:07:01.594 Variable Capacity Management: Not Supported 00:07:01.594 Delete Endurance Group: Not Supported 00:07:01.594 Delete NVM Set: Not Supported 00:07:01.594 Extended LBA Formats Supported: Supported 00:07:01.594 Flexible Data Placement Supported: Not Supported 00:07:01.594 00:07:01.594 Controller Memory Buffer Support 00:07:01.594 ================================ 00:07:01.594 Supported: No 00:07:01.594 00:07:01.594 Persistent Memory Region Support 00:07:01.594 ================================ 00:07:01.594 Supported: No 00:07:01.594 00:07:01.594 Admin Command Set Attributes 00:07:01.594 ============================ 00:07:01.594 Security Send/Receive: Not Supported 00:07:01.594 Format NVM: Supported 00:07:01.594 Firmware Activate/Download: Not Supported 00:07:01.594 Namespace Management: Supported 00:07:01.594 Device Self-Test: Not Supported 00:07:01.594 Directives: Supported 00:07:01.594 NVMe-MI: Not Supported 00:07:01.594 Virtualization Management: Not Supported 00:07:01.594 Doorbell Buffer Config: Supported 00:07:01.594 Get LBA Status Capability: Not Supported 00:07:01.594 Command & Feature Lockdown Capability: Not Supported 00:07:01.594 Abort Command Limit: 4 00:07:01.594 Async Event Request Limit: 4 00:07:01.594 Number of Firmware Slots: N/A 00:07:01.594 Firmware Slot 1 Read-Only: N/A 00:07:01.594 Firmware Activation Without Reset: N/A 00:07:01.594 Multiple Update Detection Support: N/A 
00:07:01.594 Firmware Update Granularity: No Information Provided 00:07:01.594 Per-Namespace SMART Log: Yes 00:07:01.594 Asymmetric Namespace Access Log Page: Not Supported 00:07:01.594 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:01.594 Command Effects Log Page: Supported 00:07:01.594 Get Log Page Extended Data: Supported 00:07:01.594 Telemetry Log Pages: Not Supported 00:07:01.594 Persistent Event Log Pages: Not Supported 00:07:01.594 Supported Log Pages Log Page: May Support 00:07:01.594 Commands Supported & Effects Log Page: Not Supported 00:07:01.594 Feature Identifiers & Effects Log Page:May Support 00:07:01.594 NVMe-MI Commands & Effects Log Page: May Support 00:07:01.594 Data Area 4 for Telemetry Log: Not Supported 00:07:01.594 Error Log Page Entries Supported: 1 00:07:01.594 Keep Alive: Not Supported 00:07:01.594 00:07:01.594 NVM Command Set Attributes 00:07:01.594 ========================== 00:07:01.594 Submission Queue Entry Size 00:07:01.594 Max: 64 00:07:01.594 Min: 64 00:07:01.594 Completion Queue Entry Size 00:07:01.594 Max: 16 00:07:01.594 Min: 16 00:07:01.594 Number of Namespaces: 256 00:07:01.594 Compare Command: Supported 00:07:01.594 Write Uncorrectable Command: Not Supported 00:07:01.594 Dataset Management Command: Supported 00:07:01.594 Write Zeroes Command: Supported 00:07:01.594 Set Features Save Field: Supported 00:07:01.594 Reservations: Not Supported 00:07:01.594 Timestamp: Supported 00:07:01.594 Copy: Supported 00:07:01.594 Volatile Write Cache: Present 00:07:01.594 Atomic Write Unit (Normal): 1 00:07:01.594 Atomic Write Unit (PFail): 1 00:07:01.594 Atomic Compare & Write Unit: 1 00:07:01.594 Fused Compare & Write: Not Supported 00:07:01.594 Scatter-Gather List 00:07:01.594 SGL Command Set: Supported 00:07:01.594 SGL Keyed: Not Supported 00:07:01.594 SGL Bit Bucket Descriptor: Not Supported 00:07:01.594 SGL Metadata Pointer: Not Supported 00:07:01.594 Oversized SGL: Not Supported 00:07:01.594 SGL Metadata Address: Not Supported 00:07:01.594 SGL Offset: Not Supported 00:07:01.594 Transport SGL Data Block: Not Supported 00:07:01.594 Replay Protected Memory Block: Not Supported 00:07:01.594 00:07:01.594 Firmware Slot Information 00:07:01.594 ========================= 00:07:01.594 Active slot: 1 00:07:01.594 Slot 1 Firmware Revision: 1.0 00:07:01.594 00:07:01.594 00:07:01.594 Commands Supported and Effects 00:07:01.594 ============================== 00:07:01.594 Admin Commands 00:07:01.594 -------------- 00:07:01.594 Delete I/O Submission Queue (00h): Supported 00:07:01.594 Create I/O Submission Queue (01h): Supported 00:07:01.594 Get Log Page (02h): Supported 00:07:01.594 Delete I/O Completion Queue (04h): Supported 00:07:01.594 Create I/O Completion Queue (05h): Supported 00:07:01.594 Identify (06h): Supported 00:07:01.594 Abort (08h): Supported 00:07:01.594 Set Features (09h): Supported 00:07:01.594 Get Features (0Ah): Supported 00:07:01.594 Asynchronous Event Request (0Ch): Supported 00:07:01.595 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:01.595 Directive Send (19h): Supported 00:07:01.595 Directive Receive (1Ah): Supported 00:07:01.595 Virtualization Management (1Ch): Supported 00:07:01.595 Doorbell Buffer Config (7Ch): Supported 00:07:01.595 Format NVM (80h): Supported LBA-Change 00:07:01.595 I/O Commands 00:07:01.595 ------------ 00:07:01.595 Flush (00h): Supported LBA-Change 00:07:01.595 Write (01h): Supported LBA-Change 00:07:01.595 Read (02h): Supported 00:07:01.595 Compare (05h): Supported 00:07:01.595 Write Zeroes (08h): Supported 
LBA-Change 00:07:01.595 Dataset Management (09h): Supported LBA-Change 00:07:01.595 Unknown (0Ch): Supported 00:07:01.595 Unknown (12h): Supported 00:07:01.595 Copy (19h): Supported LBA-Change 00:07:01.595 Unknown (1Dh): Supported LBA-Change 00:07:01.595 00:07:01.595 Error Log 00:07:01.595 ========= 00:07:01.595 00:07:01.595 Arbitration 00:07:01.595 =========== 00:07:01.595 Arbitration Burst: no limit 00:07:01.595 00:07:01.595 Power Management 00:07:01.595 ================ 00:07:01.595 Number of Power States: 1 00:07:01.595 Current Power State: Power State #0 00:07:01.595 Power State #0: 00:07:01.595 Max Power: 25.00 W 00:07:01.595 Non-Operational State: Operational 00:07:01.595 Entry Latency: 16 microseconds 00:07:01.595 Exit Latency: 4 microseconds 00:07:01.595 Relative Read Throughput: 0 00:07:01.595 Relative Read Latency: 0 00:07:01.595 Relative Write Throughput: 0 00:07:01.595 Relative Write Latency: 0 00:07:01.595 Idle Power: Not Reported 00:07:01.595 Active Power: Not Reported 00:07:01.595 Non-Operational Permissive Mode: Not Supported 00:07:01.595 00:07:01.595 Health Information 00:07:01.595 ================== 00:07:01.595 Critical Warnings: 00:07:01.595 Available Spare Space: OK 00:07:01.595 Temperature: OK 00:07:01.595 Device Reliability: OK 00:07:01.595 Read Only: No 00:07:01.595 Volatile Memory Backup: OK 00:07:01.595 Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.595 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:01.595 Available Spare: 0% 00:07:01.595 Available Spare Threshold: 0% 00:07:01.595 Life Percentage Used: 0% 00:07:01.595 Data Units Read: 2316 00:07:01.595 Data Units Written: 2103 00:07:01.595 Host Read Commands: 123211 00:07:01.595 Host Write Commands: 121480 00:07:01.595 Controller Busy Time: 0 minutes 00:07:01.595 Power Cycles: 0 00:07:01.595 Power On Hours: 0 hours 00:07:01.595 Unsafe Shutdowns: 0 00:07:01.595 Unrecoverable Media Errors: 0 00:07:01.595 Lifetime Error Log Entries: 0 00:07:01.595 Warning Temperature Time: 0 minutes 00:07:01.595 Critical Temperature Time: 0 minutes 00:07:01.595 00:07:01.595 Number of Queues 00:07:01.595 ================ 00:07:01.595 Number of I/O Submission Queues: 64 00:07:01.595 Number of I/O Completion Queues: 64 00:07:01.595 00:07:01.595 ZNS Specific Controller Data 00:07:01.595 ============================ 00:07:01.595 Zone Append Size Limit: 0 00:07:01.595 00:07:01.595 00:07:01.595 Active Namespaces 00:07:01.595 ================= 00:07:01.595 Namespace ID:1 00:07:01.595 Error Recovery Timeout: Unlimited 00:07:01.595 Command Set Identifier: NVM (00h) 00:07:01.595 Deallocate: Supported 00:07:01.595 Deallocated/Unwritten Error: Supported 00:07:01.595 Deallocated Read Value: All 0x00 00:07:01.595 Deallocate in Write Zeroes: Not Supported 00:07:01.595 Deallocated Guard Field: 0xFFFF 00:07:01.595 Flush: Supported 00:07:01.595 Reservation: Not Supported 00:07:01.595 Namespace Sharing Capabilities: Private 00:07:01.595 Size (in LBAs): 1048576 (4GiB) 00:07:01.595 Capacity (in LBAs): 1048576 (4GiB) 00:07:01.595 Utilization (in LBAs): 1048576 (4GiB) 00:07:01.595 Thin Provisioning: Not Supported 00:07:01.595 Per-NS Atomic Units: No 00:07:01.595 Maximum Single Source Range Length: 128 00:07:01.595 Maximum Copy Length: 128 00:07:01.595 Maximum Source Range Count: 128 00:07:01.595 NGUID/EUI64 Never Reused: No 00:07:01.595 Namespace Write Protected: No 00:07:01.595 Number of LBA Formats: 8 00:07:01.595 Current LBA Format: LBA Format #04 00:07:01.595 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.595 LBA Format #01: Data 
Size: 512 Metadata Size: 8 00:07:01.595 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.595 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.595 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.595 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.595 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.595 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.595 00:07:01.595 NVM Specific Namespace Data 00:07:01.595 =========================== 00:07:01.595 Logical Block Storage Tag Mask: 0 00:07:01.595 Protection Information Capabilities: 00:07:01.595 16b Guard Protection Information Storage Tag Support: No 00:07:01.595 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.595 Storage Tag Check Read Support: No 00:07:01.595 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Namespace ID:2 00:07:01.595 Error Recovery Timeout: Unlimited 00:07:01.595 Command Set Identifier: NVM (00h) 00:07:01.595 Deallocate: Supported 00:07:01.595 Deallocated/Unwritten Error: Supported 00:07:01.595 Deallocated Read Value: All 0x00 00:07:01.595 Deallocate in Write Zeroes: Not Supported 00:07:01.595 Deallocated Guard Field: 0xFFFF 00:07:01.595 Flush: Supported 00:07:01.595 Reservation: Not Supported 00:07:01.595 Namespace Sharing Capabilities: Private 00:07:01.595 Size (in LBAs): 1048576 (4GiB) 00:07:01.595 Capacity (in LBAs): 1048576 (4GiB) 00:07:01.595 Utilization (in LBAs): 1048576 (4GiB) 00:07:01.595 Thin Provisioning: Not Supported 00:07:01.595 Per-NS Atomic Units: No 00:07:01.595 Maximum Single Source Range Length: 128 00:07:01.595 Maximum Copy Length: 128 00:07:01.595 Maximum Source Range Count: 128 00:07:01.595 NGUID/EUI64 Never Reused: No 00:07:01.595 Namespace Write Protected: No 00:07:01.595 Number of LBA Formats: 8 00:07:01.595 Current LBA Format: LBA Format #04 00:07:01.595 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.595 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.595 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.595 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.595 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.595 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.595 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.595 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.595 00:07:01.595 NVM Specific Namespace Data 00:07:01.595 =========================== 00:07:01.595 Logical Block Storage Tag Mask: 0 00:07:01.595 Protection Information Capabilities: 00:07:01.595 16b Guard Protection Information Storage Tag Support: No 00:07:01.595 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.595 Storage Tag Check Read Support: No 00:07:01.595 
Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.595 Namespace ID:3 00:07:01.595 Error Recovery Timeout: Unlimited 00:07:01.595 Command Set Identifier: NVM (00h) 00:07:01.595 Deallocate: Supported 00:07:01.595 Deallocated/Unwritten Error: Supported 00:07:01.595 Deallocated Read Value: All 0x00 00:07:01.595 Deallocate in Write Zeroes: Not Supported 00:07:01.595 Deallocated Guard Field: 0xFFFF 00:07:01.595 Flush: Supported 00:07:01.595 Reservation: Not Supported 00:07:01.595 Namespace Sharing Capabilities: Private 00:07:01.595 Size (in LBAs): 1048576 (4GiB) 00:07:01.595 Capacity (in LBAs): 1048576 (4GiB) 00:07:01.595 Utilization (in LBAs): 1048576 (4GiB) 00:07:01.595 Thin Provisioning: Not Supported 00:07:01.595 Per-NS Atomic Units: No 00:07:01.595 Maximum Single Source Range Length: 128 00:07:01.595 Maximum Copy Length: 128 00:07:01.595 Maximum Source Range Count: 128 00:07:01.595 NGUID/EUI64 Never Reused: No 00:07:01.595 Namespace Write Protected: No 00:07:01.595 Number of LBA Formats: 8 00:07:01.595 Current LBA Format: LBA Format #04 00:07:01.595 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.595 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.595 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.595 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.595 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.595 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.596 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.596 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.596 00:07:01.596 NVM Specific Namespace Data 00:07:01.596 =========================== 00:07:01.596 Logical Block Storage Tag Mask: 0 00:07:01.596 Protection Information Capabilities: 00:07:01.596 16b Guard Protection Information Storage Tag Support: No 00:07:01.596 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.596 Storage Tag Check Read Support: No 00:07:01.596 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.596 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.596 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.596 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.596 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.596 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.596 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.596 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:07:01.596 10:38:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:01.596 10:38:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:01.858 ===================================================== 00:07:01.858 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:01.858 ===================================================== 00:07:01.858 Controller Capabilities/Features 00:07:01.858 ================================ 00:07:01.858 Vendor ID: 1b36 00:07:01.858 Subsystem Vendor ID: 1af4 00:07:01.858 Serial Number: 12340 00:07:01.858 Model Number: QEMU NVMe Ctrl 00:07:01.858 Firmware Version: 8.0.0 00:07:01.858 Recommended Arb Burst: 6 00:07:01.858 IEEE OUI Identifier: 00 54 52 00:07:01.858 Multi-path I/O 00:07:01.858 May have multiple subsystem ports: No 00:07:01.858 May have multiple controllers: No 00:07:01.858 Associated with SR-IOV VF: No 00:07:01.858 Max Data Transfer Size: 524288 00:07:01.858 Max Number of Namespaces: 256 00:07:01.858 Max Number of I/O Queues: 64 00:07:01.858 NVMe Specification Version (VS): 1.4 00:07:01.858 NVMe Specification Version (Identify): 1.4 00:07:01.858 Maximum Queue Entries: 2048 00:07:01.858 Contiguous Queues Required: Yes 00:07:01.858 Arbitration Mechanisms Supported 00:07:01.858 Weighted Round Robin: Not Supported 00:07:01.858 Vendor Specific: Not Supported 00:07:01.858 Reset Timeout: 7500 ms 00:07:01.858 Doorbell Stride: 4 bytes 00:07:01.858 NVM Subsystem Reset: Not Supported 00:07:01.858 Command Sets Supported 00:07:01.858 NVM Command Set: Supported 00:07:01.858 Boot Partition: Not Supported 00:07:01.858 Memory Page Size Minimum: 4096 bytes 00:07:01.858 Memory Page Size Maximum: 65536 bytes 00:07:01.858 Persistent Memory Region: Not Supported 00:07:01.858 Optional Asynchronous Events Supported 00:07:01.858 Namespace Attribute Notices: Supported 00:07:01.858 Firmware Activation Notices: Not Supported 00:07:01.858 ANA Change Notices: Not Supported 00:07:01.858 PLE Aggregate Log Change Notices: Not Supported 00:07:01.858 LBA Status Info Alert Notices: Not Supported 00:07:01.858 EGE Aggregate Log Change Notices: Not Supported 00:07:01.858 Normal NVM Subsystem Shutdown event: Not Supported 00:07:01.858 Zone Descriptor Change Notices: Not Supported 00:07:01.858 Discovery Log Change Notices: Not Supported 00:07:01.858 Controller Attributes 00:07:01.858 128-bit Host Identifier: Not Supported 00:07:01.858 Non-Operational Permissive Mode: Not Supported 00:07:01.858 NVM Sets: Not Supported 00:07:01.858 Read Recovery Levels: Not Supported 00:07:01.858 Endurance Groups: Not Supported 00:07:01.858 Predictable Latency Mode: Not Supported 00:07:01.858 Traffic Based Keep ALive: Not Supported 00:07:01.858 Namespace Granularity: Not Supported 00:07:01.858 SQ Associations: Not Supported 00:07:01.858 UUID List: Not Supported 00:07:01.858 Multi-Domain Subsystem: Not Supported 00:07:01.858 Fixed Capacity Management: Not Supported 00:07:01.858 Variable Capacity Management: Not Supported 00:07:01.858 Delete Endurance Group: Not Supported 00:07:01.858 Delete NVM Set: Not Supported 00:07:01.858 Extended LBA Formats Supported: Supported 00:07:01.858 Flexible Data Placement Supported: Not Supported 00:07:01.858 00:07:01.858 Controller Memory Buffer Support 00:07:01.858 ================================ 00:07:01.858 Supported: No 00:07:01.858 00:07:01.858 Persistent Memory Region Support 00:07:01.858 ================================ 00:07:01.858 Supported: No 
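The report above is produced by the spdk_nvme_identify invocation shown at the start of this block, which the harness runs once per controller in its bdfs loop. To reproduce a single controller's dump by hand, the same binary can be invoked with the same flags (the path is the one used in this run; substitute the traddr of the controller of interest):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0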
00:07:01.858 00:07:01.858 Admin Command Set Attributes 00:07:01.858 ============================ 00:07:01.858 Security Send/Receive: Not Supported 00:07:01.858 Format NVM: Supported 00:07:01.858 Firmware Activate/Download: Not Supported 00:07:01.858 Namespace Management: Supported 00:07:01.858 Device Self-Test: Not Supported 00:07:01.858 Directives: Supported 00:07:01.858 NVMe-MI: Not Supported 00:07:01.858 Virtualization Management: Not Supported 00:07:01.858 Doorbell Buffer Config: Supported 00:07:01.858 Get LBA Status Capability: Not Supported 00:07:01.858 Command & Feature Lockdown Capability: Not Supported 00:07:01.858 Abort Command Limit: 4 00:07:01.858 Async Event Request Limit: 4 00:07:01.858 Number of Firmware Slots: N/A 00:07:01.858 Firmware Slot 1 Read-Only: N/A 00:07:01.858 Firmware Activation Without Reset: N/A 00:07:01.858 Multiple Update Detection Support: N/A 00:07:01.858 Firmware Update Granularity: No Information Provided 00:07:01.858 Per-Namespace SMART Log: Yes 00:07:01.858 Asymmetric Namespace Access Log Page: Not Supported 00:07:01.858 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:01.858 Command Effects Log Page: Supported 00:07:01.858 Get Log Page Extended Data: Supported 00:07:01.858 Telemetry Log Pages: Not Supported 00:07:01.858 Persistent Event Log Pages: Not Supported 00:07:01.858 Supported Log Pages Log Page: May Support 00:07:01.858 Commands Supported & Effects Log Page: Not Supported 00:07:01.858 Feature Identifiers & Effects Log Page:May Support 00:07:01.858 NVMe-MI Commands & Effects Log Page: May Support 00:07:01.858 Data Area 4 for Telemetry Log: Not Supported 00:07:01.858 Error Log Page Entries Supported: 1 00:07:01.858 Keep Alive: Not Supported 00:07:01.858 00:07:01.858 NVM Command Set Attributes 00:07:01.858 ========================== 00:07:01.858 Submission Queue Entry Size 00:07:01.858 Max: 64 00:07:01.858 Min: 64 00:07:01.858 Completion Queue Entry Size 00:07:01.859 Max: 16 00:07:01.859 Min: 16 00:07:01.859 Number of Namespaces: 256 00:07:01.859 Compare Command: Supported 00:07:01.859 Write Uncorrectable Command: Not Supported 00:07:01.859 Dataset Management Command: Supported 00:07:01.859 Write Zeroes Command: Supported 00:07:01.859 Set Features Save Field: Supported 00:07:01.859 Reservations: Not Supported 00:07:01.859 Timestamp: Supported 00:07:01.859 Copy: Supported 00:07:01.859 Volatile Write Cache: Present 00:07:01.859 Atomic Write Unit (Normal): 1 00:07:01.859 Atomic Write Unit (PFail): 1 00:07:01.859 Atomic Compare & Write Unit: 1 00:07:01.859 Fused Compare & Write: Not Supported 00:07:01.859 Scatter-Gather List 00:07:01.859 SGL Command Set: Supported 00:07:01.859 SGL Keyed: Not Supported 00:07:01.859 SGL Bit Bucket Descriptor: Not Supported 00:07:01.859 SGL Metadata Pointer: Not Supported 00:07:01.859 Oversized SGL: Not Supported 00:07:01.859 SGL Metadata Address: Not Supported 00:07:01.859 SGL Offset: Not Supported 00:07:01.859 Transport SGL Data Block: Not Supported 00:07:01.859 Replay Protected Memory Block: Not Supported 00:07:01.859 00:07:01.859 Firmware Slot Information 00:07:01.859 ========================= 00:07:01.859 Active slot: 1 00:07:01.859 Slot 1 Firmware Revision: 1.0 00:07:01.859 00:07:01.859 00:07:01.859 Commands Supported and Effects 00:07:01.859 ============================== 00:07:01.859 Admin Commands 00:07:01.859 -------------- 00:07:01.859 Delete I/O Submission Queue (00h): Supported 00:07:01.859 Create I/O Submission Queue (01h): Supported 00:07:01.859 Get Log Page (02h): Supported 00:07:01.859 Delete I/O 
Completion Queue (04h): Supported 00:07:01.859 Create I/O Completion Queue (05h): Supported 00:07:01.859 Identify (06h): Supported 00:07:01.859 Abort (08h): Supported 00:07:01.859 Set Features (09h): Supported 00:07:01.859 Get Features (0Ah): Supported 00:07:01.859 Asynchronous Event Request (0Ch): Supported 00:07:01.859 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:01.859 Directive Send (19h): Supported 00:07:01.859 Directive Receive (1Ah): Supported 00:07:01.859 Virtualization Management (1Ch): Supported 00:07:01.859 Doorbell Buffer Config (7Ch): Supported 00:07:01.859 Format NVM (80h): Supported LBA-Change 00:07:01.859 I/O Commands 00:07:01.859 ------------ 00:07:01.859 Flush (00h): Supported LBA-Change 00:07:01.859 Write (01h): Supported LBA-Change 00:07:01.859 Read (02h): Supported 00:07:01.859 Compare (05h): Supported 00:07:01.859 Write Zeroes (08h): Supported LBA-Change 00:07:01.859 Dataset Management (09h): Supported LBA-Change 00:07:01.859 Unknown (0Ch): Supported 00:07:01.859 Unknown (12h): Supported 00:07:01.859 Copy (19h): Supported LBA-Change 00:07:01.859 Unknown (1Dh): Supported LBA-Change 00:07:01.859 00:07:01.859 Error Log 00:07:01.859 ========= 00:07:01.859 00:07:01.859 Arbitration 00:07:01.859 =========== 00:07:01.859 Arbitration Burst: no limit 00:07:01.859 00:07:01.859 Power Management 00:07:01.859 ================ 00:07:01.859 Number of Power States: 1 00:07:01.859 Current Power State: Power State #0 00:07:01.859 Power State #0: 00:07:01.859 Max Power: 25.00 W 00:07:01.859 Non-Operational State: Operational 00:07:01.859 Entry Latency: 16 microseconds 00:07:01.859 Exit Latency: 4 microseconds 00:07:01.859 Relative Read Throughput: 0 00:07:01.859 Relative Read Latency: 0 00:07:01.859 Relative Write Throughput: 0 00:07:01.859 Relative Write Latency: 0 00:07:01.859 Idle Power: Not Reported 00:07:01.859 Active Power: Not Reported 00:07:01.859 Non-Operational Permissive Mode: Not Supported 00:07:01.859 00:07:01.859 Health Information 00:07:01.859 ================== 00:07:01.859 Critical Warnings: 00:07:01.859 Available Spare Space: OK 00:07:01.859 Temperature: OK 00:07:01.859 Device Reliability: OK 00:07:01.859 Read Only: No 00:07:01.859 Volatile Memory Backup: OK 00:07:01.859 Current Temperature: 323 Kelvin (50 Celsius) 00:07:01.859 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:01.859 Available Spare: 0% 00:07:01.859 Available Spare Threshold: 0% 00:07:01.859 Life Percentage Used: 0% 00:07:01.859 Data Units Read: 708 00:07:01.859 Data Units Written: 636 00:07:01.859 Host Read Commands: 40288 00:07:01.859 Host Write Commands: 40074 00:07:01.859 Controller Busy Time: 0 minutes 00:07:01.859 Power Cycles: 0 00:07:01.859 Power On Hours: 0 hours 00:07:01.859 Unsafe Shutdowns: 0 00:07:01.859 Unrecoverable Media Errors: 0 00:07:01.859 Lifetime Error Log Entries: 0 00:07:01.859 Warning Temperature Time: 0 minutes 00:07:01.859 Critical Temperature Time: 0 minutes 00:07:01.859 00:07:01.859 Number of Queues 00:07:01.859 ================ 00:07:01.859 Number of I/O Submission Queues: 64 00:07:01.859 Number of I/O Completion Queues: 64 00:07:01.859 00:07:01.859 ZNS Specific Controller Data 00:07:01.859 ============================ 00:07:01.859 Zone Append Size Limit: 0 00:07:01.859 00:07:01.859 00:07:01.859 Active Namespaces 00:07:01.859 ================= 00:07:01.859 Namespace ID:1 00:07:01.859 Error Recovery Timeout: Unlimited 00:07:01.859 Command Set Identifier: NVM (00h) 00:07:01.859 Deallocate: Supported 00:07:01.859 Deallocated/Unwritten Error: Supported 
00:07:01.859 Deallocated Read Value: All 0x00 00:07:01.859 Deallocate in Write Zeroes: Not Supported 00:07:01.859 Deallocated Guard Field: 0xFFFF 00:07:01.859 Flush: Supported 00:07:01.859 Reservation: Not Supported 00:07:01.859 Metadata Transferred as: Separate Metadata Buffer 00:07:01.859 Namespace Sharing Capabilities: Private 00:07:01.859 Size (in LBAs): 1548666 (5GiB) 00:07:01.859 Capacity (in LBAs): 1548666 (5GiB) 00:07:01.859 Utilization (in LBAs): 1548666 (5GiB) 00:07:01.859 Thin Provisioning: Not Supported 00:07:01.859 Per-NS Atomic Units: No 00:07:01.859 Maximum Single Source Range Length: 128 00:07:01.859 Maximum Copy Length: 128 00:07:01.859 Maximum Source Range Count: 128 00:07:01.859 NGUID/EUI64 Never Reused: No 00:07:01.859 Namespace Write Protected: No 00:07:01.859 Number of LBA Formats: 8 00:07:01.859 Current LBA Format: LBA Format #07 00:07:01.859 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:01.859 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:01.859 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:01.859 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:01.859 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:01.859 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:01.859 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:01.859 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:01.859 00:07:01.859 NVM Specific Namespace Data 00:07:01.859 =========================== 00:07:01.859 Logical Block Storage Tag Mask: 0 00:07:01.859 Protection Information Capabilities: 00:07:01.859 16b Guard Protection Information Storage Tag Support: No 00:07:01.859 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:01.859 Storage Tag Check Read Support: No 00:07:01.859 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.859 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.859 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.859 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.859 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.859 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.859 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.859 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:01.859 10:38:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:01.859 10:38:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:02.122 ===================================================== 00:07:02.122 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:02.122 ===================================================== 00:07:02.122 Controller Capabilities/Features 00:07:02.122 ================================ 00:07:02.122 Vendor ID: 1b36 00:07:02.122 Subsystem Vendor ID: 1af4 00:07:02.122 Serial Number: 12341 00:07:02.122 Model Number: QEMU NVMe Ctrl 00:07:02.122 Firmware Version: 8.0.0 00:07:02.122 Recommended Arb Burst: 6 00:07:02.122 IEEE OUI Identifier: 00 54 52 00:07:02.122 Multi-path I/O 00:07:02.122 May have multiple subsystem ports: No 00:07:02.122 May have multiple controllers: No 00:07:02.122 Associated with SR-IOV 
VF: No 00:07:02.122 Max Data Transfer Size: 524288 00:07:02.122 Max Number of Namespaces: 256 00:07:02.122 Max Number of I/O Queues: 64 00:07:02.122 NVMe Specification Version (VS): 1.4 00:07:02.122 NVMe Specification Version (Identify): 1.4 00:07:02.122 Maximum Queue Entries: 2048 00:07:02.122 Contiguous Queues Required: Yes 00:07:02.122 Arbitration Mechanisms Supported 00:07:02.122 Weighted Round Robin: Not Supported 00:07:02.122 Vendor Specific: Not Supported 00:07:02.122 Reset Timeout: 7500 ms 00:07:02.122 Doorbell Stride: 4 bytes 00:07:02.122 NVM Subsystem Reset: Not Supported 00:07:02.122 Command Sets Supported 00:07:02.122 NVM Command Set: Supported 00:07:02.122 Boot Partition: Not Supported 00:07:02.122 Memory Page Size Minimum: 4096 bytes 00:07:02.122 Memory Page Size Maximum: 65536 bytes 00:07:02.122 Persistent Memory Region: Not Supported 00:07:02.122 Optional Asynchronous Events Supported 00:07:02.140 Namespace Attribute Notices: Supported 00:07:02.140 Firmware Activation Notices: Not Supported 00:07:02.140 ANA Change Notices: Not Supported 00:07:02.140 PLE Aggregate Log Change Notices: Not Supported 00:07:02.140 LBA Status Info Alert Notices: Not Supported 00:07:02.140 EGE Aggregate Log Change Notices: Not Supported 00:07:02.140 Normal NVM Subsystem Shutdown Event: Not Supported 00:07:02.140 Zone Descriptor Change Notices: Not Supported 00:07:02.140 Discovery Log Change Notices: Not Supported 00:07:02.141 Controller Attributes 00:07:02.141 128-bit Host Identifier: Not Supported 00:07:02.141 Non-Operational Permissive Mode: Not Supported 00:07:02.141 NVM Sets: Not Supported 00:07:02.141 Read Recovery Levels: Not Supported 00:07:02.141 Endurance Groups: Not Supported 00:07:02.141 Predictable Latency Mode: Not Supported 00:07:02.141 Traffic Based Keep Alive: Not Supported 00:07:02.141 Namespace Granularity: Not Supported 00:07:02.141 SQ Associations: Not Supported 00:07:02.141 UUID List: Not Supported 00:07:02.141 Multi-Domain Subsystem: Not Supported 00:07:02.141 Fixed Capacity Management: Not Supported 00:07:02.141 Variable Capacity Management: Not Supported 00:07:02.141 Delete Endurance Group: Not Supported 00:07:02.141 Delete NVM Set: Not Supported 00:07:02.141 Extended LBA Formats Supported: Supported 00:07:02.141 Flexible Data Placement Supported: Not Supported 00:07:02.141 00:07:02.141 Controller Memory Buffer Support 00:07:02.141 ================================ 00:07:02.141 Supported: No 00:07:02.141 00:07:02.141 Persistent Memory Region Support 00:07:02.141 ================================ 00:07:02.141 Supported: No 00:07:02.141 00:07:02.141 Admin Command Set Attributes 00:07:02.141 ============================ 00:07:02.141 Security Send/Receive: Not Supported 00:07:02.141 Format NVM: Supported 00:07:02.141 Firmware Activate/Download: Not Supported 00:07:02.141 Namespace Management: Supported 00:07:02.141 Device Self-Test: Not Supported 00:07:02.141 Directives: Supported 00:07:02.141 NVMe-MI: Not Supported 00:07:02.141 Virtualization Management: Not Supported 00:07:02.141 Doorbell Buffer Config: Supported 00:07:02.141 Get LBA Status Capability: Not Supported 00:07:02.141 Command & Feature Lockdown Capability: Not Supported 00:07:02.141 Abort Command Limit: 4 00:07:02.141 Async Event Request Limit: 4 00:07:02.141 Number of Firmware Slots: N/A 00:07:02.141 Firmware Slot 1 Read-Only: N/A 00:07:02.141 Firmware Activation Without Reset: N/A 00:07:02.141 Multiple Update Detection Support: N/A 00:07:02.141 Firmware Update Granularity: No Information Provided 00:07:02.141 
Per-Namespace SMART Log: Yes 00:07:02.141 Asymmetric Namespace Access Log Page: Not Supported 00:07:02.141 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:02.141 Command Effects Log Page: Supported 00:07:02.141 Get Log Page Extended Data: Supported 00:07:02.141 Telemetry Log Pages: Not Supported 00:07:02.141 Persistent Event Log Pages: Not Supported 00:07:02.141 Supported Log Pages Log Page: May Support 00:07:02.141 Commands Supported & Effects Log Page: Not Supported 00:07:02.141 Feature Identifiers & Effects Log Page: May Support 00:07:02.141 NVMe-MI Commands & Effects Log Page: May Support 00:07:02.141 Data Area 4 for Telemetry Log: Not Supported 00:07:02.141 Error Log Page Entries Supported: 1 00:07:02.141 Keep Alive: Not Supported 00:07:02.141 00:07:02.141 NVM Command Set Attributes 00:07:02.141 ========================== 00:07:02.141 Submission Queue Entry Size 00:07:02.141 Max: 64 00:07:02.141 Min: 64 00:07:02.141 Completion Queue Entry Size 00:07:02.141 Max: 16 00:07:02.141 Min: 16 00:07:02.141 Number of Namespaces: 256 00:07:02.141 Compare Command: Supported 00:07:02.141 Write Uncorrectable Command: Not Supported 00:07:02.141 Dataset Management Command: Supported 00:07:02.141 Write Zeroes Command: Supported 00:07:02.141 Set Features Save Field: Supported 00:07:02.141 Reservations: Not Supported 00:07:02.141 Timestamp: Supported 00:07:02.141 Copy: Supported 00:07:02.141 Volatile Write Cache: Present 00:07:02.141 Atomic Write Unit (Normal): 1 00:07:02.141 Atomic Write Unit (PFail): 1 00:07:02.141 Atomic Compare & Write Unit: 1 00:07:02.141 Fused Compare & Write: Not Supported 00:07:02.141 Scatter-Gather List 00:07:02.141 SGL Command Set: Supported 00:07:02.141 SGL Keyed: Not Supported 00:07:02.141 SGL Bit Bucket Descriptor: Not Supported 00:07:02.141 SGL Metadata Pointer: Not Supported 00:07:02.141 Oversized SGL: Not Supported 00:07:02.141 SGL Metadata Address: Not Supported 00:07:02.141 SGL Offset: Not Supported 00:07:02.141 Transport SGL Data Block: Not Supported 00:07:02.141 Replay Protected Memory Block: Not Supported 00:07:02.141 00:07:02.141 Firmware Slot Information 00:07:02.141 ========================= 00:07:02.141 Active slot: 1 00:07:02.141 Slot 1 Firmware Revision: 1.0 00:07:02.141 00:07:02.141 00:07:02.141 Commands Supported and Effects 00:07:02.141 ============================== 00:07:02.141 Admin Commands 00:07:02.141 -------------- 00:07:02.141 Delete I/O Submission Queue (00h): Supported 00:07:02.141 Create I/O Submission Queue (01h): Supported 00:07:02.141 Get Log Page (02h): Supported 00:07:02.141 Delete I/O Completion Queue (04h): Supported 00:07:02.141 Create I/O Completion Queue (05h): Supported 00:07:02.141 Identify (06h): Supported 00:07:02.141 Abort (08h): Supported 00:07:02.141 Set Features (09h): Supported 00:07:02.141 Get Features (0Ah): Supported 00:07:02.141 Asynchronous Event Request (0Ch): Supported 00:07:02.141 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:02.141 Directive Send (19h): Supported 00:07:02.141 Directive Receive (1Ah): Supported 00:07:02.141 Virtualization Management (1Ch): Supported 00:07:02.141 Doorbell Buffer Config (7Ch): Supported 00:07:02.141 Format NVM (80h): Supported LBA-Change 00:07:02.141 I/O Commands 00:07:02.141 ------------ 00:07:02.141 Flush (00h): Supported LBA-Change 00:07:02.141 Write (01h): Supported LBA-Change 00:07:02.141 Read (02h): Supported 00:07:02.141 Compare (05h): Supported 00:07:02.141 Write Zeroes (08h): Supported LBA-Change 00:07:02.141 Dataset Management (09h): Supported LBA-Change 
00:07:02.141 Unknown (0Ch): Supported 00:07:02.141 Unknown (12h): Supported 00:07:02.141 Copy (19h): Supported LBA-Change 00:07:02.141 Unknown (1Dh): Supported LBA-Change 00:07:02.141 00:07:02.141 Error Log 00:07:02.141 ========= 00:07:02.141 00:07:02.141 Arbitration 00:07:02.141 =========== 00:07:02.141 Arbitration Burst: no limit 00:07:02.141 00:07:02.141 Power Management 00:07:02.141 ================ 00:07:02.141 Number of Power States: 1 00:07:02.141 Current Power State: Power State #0 00:07:02.141 Power State #0: 00:07:02.141 Max Power: 25.00 W 00:07:02.141 Non-Operational State: Operational 00:07:02.141 Entry Latency: 16 microseconds 00:07:02.141 Exit Latency: 4 microseconds 00:07:02.141 Relative Read Throughput: 0 00:07:02.141 Relative Read Latency: 0 00:07:02.141 Relative Write Throughput: 0 00:07:02.141 Relative Write Latency: 0 00:07:02.141 Idle Power: Not Reported 00:07:02.141 Active Power: Not Reported 00:07:02.141 Non-Operational Permissive Mode: Not Supported 00:07:02.141 00:07:02.141 Health Information 00:07:02.141 ================== 00:07:02.141 Critical Warnings: 00:07:02.141 Available Spare Space: OK 00:07:02.141 Temperature: OK 00:07:02.141 Device Reliability: OK 00:07:02.141 Read Only: No 00:07:02.141 Volatile Memory Backup: OK 00:07:02.141 Current Temperature: 323 Kelvin (50 Celsius) 00:07:02.141 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:02.141 Available Spare: 0% 00:07:02.141 Available Spare Threshold: 0% 00:07:02.141 Life Percentage Used: 0% 00:07:02.141 Data Units Read: 1131 00:07:02.141 Data Units Written: 997 00:07:02.141 Host Read Commands: 60979 00:07:02.141 Host Write Commands: 59756 00:07:02.141 Controller Busy Time: 0 minutes 00:07:02.141 Power Cycles: 0 00:07:02.141 Power On Hours: 0 hours 00:07:02.141 Unsafe Shutdowns: 0 00:07:02.141 Unrecoverable Media Errors: 0 00:07:02.141 Lifetime Error Log Entries: 0 00:07:02.141 Warning Temperature Time: 0 minutes 00:07:02.141 Critical Temperature Time: 0 minutes 00:07:02.141 00:07:02.141 Number of Queues 00:07:02.141 ================ 00:07:02.141 Number of I/O Submission Queues: 64 00:07:02.141 Number of I/O Completion Queues: 64 00:07:02.141 00:07:02.141 ZNS Specific Controller Data 00:07:02.141 ============================ 00:07:02.141 Zone Append Size Limit: 0 00:07:02.141 00:07:02.141 00:07:02.141 Active Namespaces 00:07:02.141 ================= 00:07:02.141 Namespace ID:1 00:07:02.141 Error Recovery Timeout: Unlimited 00:07:02.141 Command Set Identifier: NVM (00h) 00:07:02.141 Deallocate: Supported 00:07:02.141 Deallocated/Unwritten Error: Supported 00:07:02.141 Deallocated Read Value: All 0x00 00:07:02.141 Deallocate in Write Zeroes: Not Supported 00:07:02.141 Deallocated Guard Field: 0xFFFF 00:07:02.141 Flush: Supported 00:07:02.141 Reservation: Not Supported 00:07:02.141 Namespace Sharing Capabilities: Private 00:07:02.141 Size (in LBAs): 1310720 (5GiB) 00:07:02.141 Capacity (in LBAs): 1310720 (5GiB) 00:07:02.141 Utilization (in LBAs): 1310720 (5GiB) 00:07:02.141 Thin Provisioning: Not Supported 00:07:02.141 Per-NS Atomic Units: No 00:07:02.142 Maximum Single Source Range Length: 128 00:07:02.142 Maximum Copy Length: 128 00:07:02.142 Maximum Source Range Count: 128 00:07:02.142 NGUID/EUI64 Never Reused: No 00:07:02.142 Namespace Write Protected: No 00:07:02.142 Number of LBA Formats: 8 00:07:02.142 Current LBA Format: LBA Format #04 00:07:02.142 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:02.142 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:02.142 LBA Format #02: Data Size: 512 
Metadata Size: 16 00:07:02.142 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:02.142 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:02.142 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:02.142 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:02.142 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:02.142 00:07:02.142 NVM Specific Namespace Data 00:07:02.142 =========================== 00:07:02.142 Logical Block Storage Tag Mask: 0 00:07:02.142 Protection Information Capabilities: 00:07:02.142 16b Guard Protection Information Storage Tag Support: No 00:07:02.142 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:02.142 Storage Tag Check Read Support: No 00:07:02.142 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.142 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.142 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.142 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.142 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.142 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.142 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.142 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.142 10:38:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:02.142 10:38:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:02.142 ===================================================== 00:07:02.142 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:02.142 ===================================================== 00:07:02.142 Controller Capabilities/Features 00:07:02.142 ================================ 00:07:02.142 Vendor ID: 1b36 00:07:02.142 Subsystem Vendor ID: 1af4 00:07:02.142 Serial Number: 12342 00:07:02.142 Model Number: QEMU NVMe Ctrl 00:07:02.142 Firmware Version: 8.0.0 00:07:02.142 Recommended Arb Burst: 6 00:07:02.142 IEEE OUI Identifier: 00 54 52 00:07:02.142 Multi-path I/O 00:07:02.142 May have multiple subsystem ports: No 00:07:02.142 May have multiple controllers: No 00:07:02.142 Associated with SR-IOV VF: No 00:07:02.142 Max Data Transfer Size: 524288 00:07:02.142 Max Number of Namespaces: 256 00:07:02.142 Max Number of I/O Queues: 64 00:07:02.142 NVMe Specification Version (VS): 1.4 00:07:02.142 NVMe Specification Version (Identify): 1.4 00:07:02.142 Maximum Queue Entries: 2048 00:07:02.142 Contiguous Queues Required: Yes 00:07:02.142 Arbitration Mechanisms Supported 00:07:02.142 Weighted Round Robin: Not Supported 00:07:02.142 Vendor Specific: Not Supported 00:07:02.142 Reset Timeout: 7500 ms 00:07:02.142 Doorbell Stride: 4 bytes 00:07:02.142 NVM Subsystem Reset: Not Supported 00:07:02.142 Command Sets Supported 00:07:02.142 NVM Command Set: Supported 00:07:02.142 Boot Partition: Not Supported 00:07:02.142 Memory Page Size Minimum: 4096 bytes 00:07:02.142 Memory Page Size Maximum: 65536 bytes 00:07:02.142 Persistent Memory Region: Not Supported 00:07:02.142 Optional Asynchronous Events Supported 00:07:02.142 Namespace Attribute Notices: Supported 00:07:02.142 Firmware Activation Notices: Not Supported 
00:07:02.142 ANA Change Notices: Not Supported 00:07:02.142 PLE Aggregate Log Change Notices: Not Supported 00:07:02.142 LBA Status Info Alert Notices: Not Supported 00:07:02.142 EGE Aggregate Log Change Notices: Not Supported 00:07:02.142 Normal NVM Subsystem Shutdown Event: Not Supported 00:07:02.142 Zone Descriptor Change Notices: Not Supported 00:07:02.142 Discovery Log Change Notices: Not Supported 00:07:02.142 Controller Attributes 00:07:02.142 128-bit Host Identifier: Not Supported 00:07:02.142 Non-Operational Permissive Mode: Not Supported 00:07:02.142 NVM Sets: Not Supported 00:07:02.142 Read Recovery Levels: Not Supported 00:07:02.142 Endurance Groups: Not Supported 00:07:02.142 Predictable Latency Mode: Not Supported 00:07:02.142 Traffic Based Keep Alive: Not Supported 00:07:02.142 Namespace Granularity: Not Supported 00:07:02.142 SQ Associations: Not Supported 00:07:02.142 UUID List: Not Supported 00:07:02.142 Multi-Domain Subsystem: Not Supported 00:07:02.142 Fixed Capacity Management: Not Supported 00:07:02.142 Variable Capacity Management: Not Supported 00:07:02.142 Delete Endurance Group: Not Supported 00:07:02.142 Delete NVM Set: Not Supported 00:07:02.142 Extended LBA Formats Supported: Supported 00:07:02.142 Flexible Data Placement Supported: Not Supported 00:07:02.142 00:07:02.142 Controller Memory Buffer Support 00:07:02.142 ================================ 00:07:02.142 Supported: No 00:07:02.142 00:07:02.142 Persistent Memory Region Support 00:07:02.142 ================================ 00:07:02.142 Supported: No 00:07:02.142 00:07:02.142 Admin Command Set Attributes 00:07:02.142 ============================ 00:07:02.142 Security Send/Receive: Not Supported 00:07:02.142 Format NVM: Supported 00:07:02.142 Firmware Activate/Download: Not Supported 00:07:02.142 Namespace Management: Supported 00:07:02.142 Device Self-Test: Not Supported 00:07:02.142 Directives: Supported 00:07:02.142 NVMe-MI: Not Supported 00:07:02.142 Virtualization Management: Not Supported 00:07:02.142 Doorbell Buffer Config: Supported 00:07:02.142 Get LBA Status Capability: Not Supported 00:07:02.142 Command & Feature Lockdown Capability: Not Supported 00:07:02.142 Abort Command Limit: 4 00:07:02.142 Async Event Request Limit: 4 00:07:02.142 Number of Firmware Slots: N/A 00:07:02.142 Firmware Slot 1 Read-Only: N/A 00:07:02.142 Firmware Activation Without Reset: N/A 00:07:02.142 Multiple Update Detection Support: N/A 00:07:02.142 Firmware Update Granularity: No Information Provided 00:07:02.142 Per-Namespace SMART Log: Yes 00:07:02.142 Asymmetric Namespace Access Log Page: Not Supported 00:07:02.142 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:02.142 Command Effects Log Page: Supported 00:07:02.142 Get Log Page Extended Data: Supported 00:07:02.142 Telemetry Log Pages: Not Supported 00:07:02.142 Persistent Event Log Pages: Not Supported 00:07:02.142 Supported Log Pages Log Page: May Support 00:07:02.142 Commands Supported & Effects Log Page: Not Supported 00:07:02.142 Feature Identifiers & Effects Log Page: May Support 00:07:02.142 NVMe-MI Commands & Effects Log Page: May Support 00:07:02.142 Data Area 4 for Telemetry Log: Not Supported 00:07:02.142 Error Log Page Entries Supported: 1 00:07:02.142 Keep Alive: Not Supported 00:07:02.142 00:07:02.142 NVM Command Set Attributes 00:07:02.142 ========================== 00:07:02.142 Submission Queue Entry Size 00:07:02.142 Max: 64 00:07:02.142 Min: 64 00:07:02.142 Completion Queue Entry Size 00:07:02.142 Max: 16 00:07:02.142 Min: 16 00:07:02.142 Number of 
Namespaces: 256 00:07:02.142 Compare Command: Supported 00:07:02.142 Write Uncorrectable Command: Not Supported 00:07:02.142 Dataset Management Command: Supported 00:07:02.142 Write Zeroes Command: Supported 00:07:02.142 Set Features Save Field: Supported 00:07:02.142 Reservations: Not Supported 00:07:02.142 Timestamp: Supported 00:07:02.142 Copy: Supported 00:07:02.142 Volatile Write Cache: Present 00:07:02.142 Atomic Write Unit (Normal): 1 00:07:02.142 Atomic Write Unit (PFail): 1 00:07:02.142 Atomic Compare & Write Unit: 1 00:07:02.142 Fused Compare & Write: Not Supported 00:07:02.142 Scatter-Gather List 00:07:02.142 SGL Command Set: Supported 00:07:02.142 SGL Keyed: Not Supported 00:07:02.142 SGL Bit Bucket Descriptor: Not Supported 00:07:02.142 SGL Metadata Pointer: Not Supported 00:07:02.142 Oversized SGL: Not Supported 00:07:02.142 SGL Metadata Address: Not Supported 00:07:02.142 SGL Offset: Not Supported 00:07:02.142 Transport SGL Data Block: Not Supported 00:07:02.142 Replay Protected Memory Block: Not Supported 00:07:02.142 00:07:02.142 Firmware Slot Information 00:07:02.142 ========================= 00:07:02.142 Active slot: 1 00:07:02.142 Slot 1 Firmware Revision: 1.0 00:07:02.142 00:07:02.142 00:07:02.142 Commands Supported and Effects 00:07:02.142 ============================== 00:07:02.142 Admin Commands 00:07:02.142 -------------- 00:07:02.142 Delete I/O Submission Queue (00h): Supported 00:07:02.142 Create I/O Submission Queue (01h): Supported 00:07:02.142 Get Log Page (02h): Supported 00:07:02.142 Delete I/O Completion Queue (04h): Supported 00:07:02.142 Create I/O Completion Queue (05h): Supported 00:07:02.143 Identify (06h): Supported 00:07:02.143 Abort (08h): Supported 00:07:02.143 Set Features (09h): Supported 00:07:02.143 Get Features (0Ah): Supported 00:07:02.143 Asynchronous Event Request (0Ch): Supported 00:07:02.143 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:02.143 Directive Send (19h): Supported 00:07:02.143 Directive Receive (1Ah): Supported 00:07:02.143 Virtualization Management (1Ch): Supported 00:07:02.143 Doorbell Buffer Config (7Ch): Supported 00:07:02.143 Format NVM (80h): Supported LBA-Change 00:07:02.143 I/O Commands 00:07:02.143 ------------ 00:07:02.143 Flush (00h): Supported LBA-Change 00:07:02.143 Write (01h): Supported LBA-Change 00:07:02.143 Read (02h): Supported 00:07:02.143 Compare (05h): Supported 00:07:02.143 Write Zeroes (08h): Supported LBA-Change 00:07:02.143 Dataset Management (09h): Supported LBA-Change 00:07:02.143 Unknown (0Ch): Supported 00:07:02.143 Unknown (12h): Supported 00:07:02.143 Copy (19h): Supported LBA-Change 00:07:02.143 Unknown (1Dh): Supported LBA-Change 00:07:02.143 00:07:02.143 Error Log 00:07:02.143 ========= 00:07:02.143 00:07:02.143 Arbitration 00:07:02.143 =========== 00:07:02.143 Arbitration Burst: no limit 00:07:02.143 00:07:02.143 Power Management 00:07:02.143 ================ 00:07:02.143 Number of Power States: 1 00:07:02.143 Current Power State: Power State #0 00:07:02.143 Power State #0: 00:07:02.143 Max Power: 25.00 W 00:07:02.143 Non-Operational State: Operational 00:07:02.143 Entry Latency: 16 microseconds 00:07:02.143 Exit Latency: 4 microseconds 00:07:02.143 Relative Read Throughput: 0 00:07:02.143 Relative Read Latency: 0 00:07:02.143 Relative Write Throughput: 0 00:07:02.143 Relative Write Latency: 0 00:07:02.143 Idle Power: Not Reported 00:07:02.143 Active Power: Not Reported 00:07:02.143 Non-Operational Permissive Mode: Not Supported 00:07:02.143 00:07:02.143 Health Information 
00:07:02.143 ================== 00:07:02.143 Critical Warnings: 00:07:02.143 Available Spare Space: OK 00:07:02.143 Temperature: OK 00:07:02.143 Device Reliability: OK 00:07:02.143 Read Only: No 00:07:02.143 Volatile Memory Backup: OK 00:07:02.143 Current Temperature: 323 Kelvin (50 Celsius) 00:07:02.143 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:02.143 Available Spare: 0% 00:07:02.143 Available Spare Threshold: 0% 00:07:02.143 Life Percentage Used: 0% 00:07:02.143 Data Units Read: 2316 00:07:02.143 Data Units Written: 2103 00:07:02.143 Host Read Commands: 123211 00:07:02.143 Host Write Commands: 121480 00:07:02.143 Controller Busy Time: 0 minutes 00:07:02.143 Power Cycles: 0 00:07:02.143 Power On Hours: 0 hours 00:07:02.143 Unsafe Shutdowns: 0 00:07:02.143 Unrecoverable Media Errors: 0 00:07:02.143 Lifetime Error Log Entries: 0 00:07:02.143 Warning Temperature Time: 0 minutes 00:07:02.143 Critical Temperature Time: 0 minutes 00:07:02.143 00:07:02.143 Number of Queues 00:07:02.143 ================ 00:07:02.143 Number of I/O Submission Queues: 64 00:07:02.143 Number of I/O Completion Queues: 64 00:07:02.143 00:07:02.143 ZNS Specific Controller Data 00:07:02.143 ============================ 00:07:02.143 Zone Append Size Limit: 0 00:07:02.143 00:07:02.143 00:07:02.143 Active Namespaces 00:07:02.143 ================= 00:07:02.143 Namespace ID:1 00:07:02.143 Error Recovery Timeout: Unlimited 00:07:02.143 Command Set Identifier: NVM (00h) 00:07:02.143 Deallocate: Supported 00:07:02.143 Deallocated/Unwritten Error: Supported 00:07:02.143 Deallocated Read Value: All 0x00 00:07:02.143 Deallocate in Write Zeroes: Not Supported 00:07:02.143 Deallocated Guard Field: 0xFFFF 00:07:02.143 Flush: Supported 00:07:02.143 Reservation: Not Supported 00:07:02.143 Namespace Sharing Capabilities: Private 00:07:02.143 Size (in LBAs): 1048576 (4GiB) 00:07:02.143 Capacity (in LBAs): 1048576 (4GiB) 00:07:02.143 Utilization (in LBAs): 1048576 (4GiB) 00:07:02.143 Thin Provisioning: Not Supported 00:07:02.143 Per-NS Atomic Units: No 00:07:02.143 Maximum Single Source Range Length: 128 00:07:02.143 Maximum Copy Length: 128 00:07:02.143 Maximum Source Range Count: 128 00:07:02.143 NGUID/EUI64 Never Reused: No 00:07:02.143 Namespace Write Protected: No 00:07:02.143 Number of LBA Formats: 8 00:07:02.143 Current LBA Format: LBA Format #04 00:07:02.143 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:02.143 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:02.143 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:02.143 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:02.143 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:02.143 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:02.143 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:02.143 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:02.143 00:07:02.143 NVM Specific Namespace Data 00:07:02.143 =========================== 00:07:02.143 Logical Block Storage Tag Mask: 0 00:07:02.143 Protection Information Capabilities: 00:07:02.143 16b Guard Protection Information Storage Tag Support: No 00:07:02.143 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:02.143 Storage Tag Check Read Support: No 00:07:02.143 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b 
Guard PI 00:07:02.143 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Namespace ID:2 00:07:02.143 Error Recovery Timeout: Unlimited 00:07:02.143 Command Set Identifier: NVM (00h) 00:07:02.143 Deallocate: Supported 00:07:02.143 Deallocated/Unwritten Error: Supported 00:07:02.143 Deallocated Read Value: All 0x00 00:07:02.143 Deallocate in Write Zeroes: Not Supported 00:07:02.143 Deallocated Guard Field: 0xFFFF 00:07:02.143 Flush: Supported 00:07:02.143 Reservation: Not Supported 00:07:02.143 Namespace Sharing Capabilities: Private 00:07:02.143 Size (in LBAs): 1048576 (4GiB) 00:07:02.143 Capacity (in LBAs): 1048576 (4GiB) 00:07:02.143 Utilization (in LBAs): 1048576 (4GiB) 00:07:02.143 Thin Provisioning: Not Supported 00:07:02.143 Per-NS Atomic Units: No 00:07:02.143 Maximum Single Source Range Length: 128 00:07:02.143 Maximum Copy Length: 128 00:07:02.143 Maximum Source Range Count: 128 00:07:02.143 NGUID/EUI64 Never Reused: No 00:07:02.143 Namespace Write Protected: No 00:07:02.143 Number of LBA Formats: 8 00:07:02.143 Current LBA Format: LBA Format #04 00:07:02.143 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:02.143 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:02.143 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:02.143 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:02.143 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:02.143 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:02.143 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:02.143 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:02.143 00:07:02.143 NVM Specific Namespace Data 00:07:02.143 =========================== 00:07:02.143 Logical Block Storage Tag Mask: 0 00:07:02.143 Protection Information Capabilities: 00:07:02.143 16b Guard Protection Information Storage Tag Support: No 00:07:02.143 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:02.143 Storage Tag Check Read Support: No 00:07:02.143 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.143 Namespace ID:3 00:07:02.143 Error Recovery Timeout: Unlimited 00:07:02.143 Command Set Identifier: NVM (00h) 00:07:02.143 Deallocate: Supported 00:07:02.143 Deallocated/Unwritten Error: Supported 00:07:02.143 Deallocated Read Value: All 0x00 00:07:02.143 Deallocate in 
Write Zeroes: Not Supported 00:07:02.143 Deallocated Guard Field: 0xFFFF 00:07:02.143 Flush: Supported 00:07:02.143 Reservation: Not Supported 00:07:02.143 Namespace Sharing Capabilities: Private 00:07:02.143 Size (in LBAs): 1048576 (4GiB) 00:07:02.143 Capacity (in LBAs): 1048576 (4GiB) 00:07:02.143 Utilization (in LBAs): 1048576 (4GiB) 00:07:02.143 Thin Provisioning: Not Supported 00:07:02.143 Per-NS Atomic Units: No 00:07:02.143 Maximum Single Source Range Length: 128 00:07:02.143 Maximum Copy Length: 128 00:07:02.143 Maximum Source Range Count: 128 00:07:02.143 NGUID/EUI64 Never Reused: No 00:07:02.143 Namespace Write Protected: No 00:07:02.143 Number of LBA Formats: 8 00:07:02.143 Current LBA Format: LBA Format #04 00:07:02.143 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:02.144 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:02.144 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:02.144 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:02.144 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:02.144 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:02.144 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:02.144 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:02.144 00:07:02.144 NVM Specific Namespace Data 00:07:02.144 =========================== 00:07:02.144 Logical Block Storage Tag Mask: 0 00:07:02.144 Protection Information Capabilities: 00:07:02.144 16b Guard Protection Information Storage Tag Support: No 00:07:02.144 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:02.404 Storage Tag Check Read Support: No 00:07:02.404 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.404 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.404 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.404 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.404 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.404 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.404 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.404 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.404 10:38:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:02.404 10:38:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:02.404 ===================================================== 00:07:02.404 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:02.404 ===================================================== 00:07:02.404 Controller Capabilities/Features 00:07:02.404 ================================ 00:07:02.404 Vendor ID: 1b36 00:07:02.404 Subsystem Vendor ID: 1af4 00:07:02.404 Serial Number: 12343 00:07:02.404 Model Number: QEMU NVMe Ctrl 00:07:02.404 Firmware Version: 8.0.0 00:07:02.404 Recommended Arb Burst: 6 00:07:02.404 IEEE OUI Identifier: 00 54 52 00:07:02.404 Multi-path I/O 00:07:02.404 May have multiple subsystem ports: No 00:07:02.404 May have multiple controllers: Yes 00:07:02.404 Associated with SR-IOV VF: No 00:07:02.404 Max Data Transfer Size: 524288 00:07:02.404 Max Number of Namespaces: 256 00:07:02.404 Max Number of I/O Queues: 64 
00:07:02.404 NVMe Specification Version (VS): 1.4 00:07:02.404 NVMe Specification Version (Identify): 1.4 00:07:02.404 Maximum Queue Entries: 2048 00:07:02.404 Contiguous Queues Required: Yes 00:07:02.404 Arbitration Mechanisms Supported 00:07:02.404 Weighted Round Robin: Not Supported 00:07:02.404 Vendor Specific: Not Supported 00:07:02.404 Reset Timeout: 7500 ms 00:07:02.404 Doorbell Stride: 4 bytes 00:07:02.404 NVM Subsystem Reset: Not Supported 00:07:02.404 Command Sets Supported 00:07:02.404 NVM Command Set: Supported 00:07:02.404 Boot Partition: Not Supported 00:07:02.404 Memory Page Size Minimum: 4096 bytes 00:07:02.404 Memory Page Size Maximum: 65536 bytes 00:07:02.404 Persistent Memory Region: Not Supported 00:07:02.404 Optional Asynchronous Events Supported 00:07:02.404 Namespace Attribute Notices: Supported 00:07:02.404 Firmware Activation Notices: Not Supported 00:07:02.404 ANA Change Notices: Not Supported 00:07:02.404 PLE Aggregate Log Change Notices: Not Supported 00:07:02.404 LBA Status Info Alert Notices: Not Supported 00:07:02.404 EGE Aggregate Log Change Notices: Not Supported 00:07:02.404 Normal NVM Subsystem Shutdown Event: Not Supported 00:07:02.404 Zone Descriptor Change Notices: Not Supported 00:07:02.404 Discovery Log Change Notices: Not Supported 00:07:02.404 Controller Attributes 00:07:02.404 128-bit Host Identifier: Not Supported 00:07:02.404 Non-Operational Permissive Mode: Not Supported 00:07:02.404 NVM Sets: Not Supported 00:07:02.404 Read Recovery Levels: Not Supported 00:07:02.404 Endurance Groups: Supported 00:07:02.404 Predictable Latency Mode: Not Supported 00:07:02.404 Traffic Based Keep Alive: Not Supported 00:07:02.404 Namespace Granularity: Not Supported 00:07:02.404 SQ Associations: Not Supported 00:07:02.404 UUID List: Not Supported 00:07:02.404 Multi-Domain Subsystem: Not Supported 00:07:02.404 Fixed Capacity Management: Not Supported 00:07:02.404 Variable Capacity Management: Not Supported 00:07:02.404 Delete Endurance Group: Not Supported 00:07:02.404 Delete NVM Set: Not Supported 00:07:02.404 Extended LBA Formats Supported: Supported 00:07:02.404 Flexible Data Placement Supported: Supported 00:07:02.404 00:07:02.404 Controller Memory Buffer Support 00:07:02.405 ================================ 00:07:02.405 Supported: No 00:07:02.405 00:07:02.405 Persistent Memory Region Support 00:07:02.405 ================================ 00:07:02.405 Supported: No 00:07:02.405 00:07:02.405 Admin Command Set Attributes 00:07:02.405 ============================ 00:07:02.405 Security Send/Receive: Not Supported 00:07:02.405 Format NVM: Supported 00:07:02.405 Firmware Activate/Download: Not Supported 00:07:02.405 Namespace Management: Supported 00:07:02.405 Device Self-Test: Not Supported 00:07:02.405 Directives: Supported 00:07:02.405 NVMe-MI: Not Supported 00:07:02.405 Virtualization Management: Not Supported 00:07:02.405 Doorbell Buffer Config: Supported 00:07:02.405 Get LBA Status Capability: Not Supported 00:07:02.405 Command & Feature Lockdown Capability: Not Supported 00:07:02.405 Abort Command Limit: 4 00:07:02.405 Async Event Request Limit: 4 00:07:02.405 Number of Firmware Slots: N/A 00:07:02.405 Firmware Slot 1 Read-Only: N/A 00:07:02.405 Firmware Activation Without Reset: N/A 00:07:02.405 Multiple Update Detection Support: N/A 00:07:02.405 Firmware Update Granularity: No Information Provided 00:07:02.405 Per-Namespace SMART Log: Yes 00:07:02.405 Asymmetric Namespace Access Log Page: Not Supported 00:07:02.405 Subsystem NQN: 
nqn.2019-08.org.qemu:fdp-subsys3 00:07:02.405 Command Effects Log Page: Supported 00:07:02.405 Get Log Page Extended Data: Supported 00:07:02.405 Telemetry Log Pages: Not Supported 00:07:02.405 Persistent Event Log Pages: Not Supported 00:07:02.405 Supported Log Pages Log Page: May Support 00:07:02.405 Commands Supported & Effects Log Page: Not Supported 00:07:02.405 Feature Identifiers & Effects Log Page: May Support 00:07:02.405 NVMe-MI Commands & Effects Log Page: May Support 00:07:02.405 Data Area 4 for Telemetry Log: Not Supported 00:07:02.405 Error Log Page Entries Supported: 1 00:07:02.405 Keep Alive: Not Supported 00:07:02.405 00:07:02.405 NVM Command Set Attributes 00:07:02.405 ========================== 00:07:02.405 Submission Queue Entry Size 00:07:02.405 Max: 64 00:07:02.405 Min: 64 00:07:02.405 Completion Queue Entry Size 00:07:02.405 Max: 16 00:07:02.405 Min: 16 00:07:02.405 Number of Namespaces: 256 00:07:02.405 Compare Command: Supported 00:07:02.405 Write Uncorrectable Command: Not Supported 00:07:02.405 Dataset Management Command: Supported 00:07:02.405 Write Zeroes Command: Supported 00:07:02.405 Set Features Save Field: Supported 00:07:02.405 Reservations: Not Supported 00:07:02.405 Timestamp: Supported 00:07:02.405 Copy: Supported 00:07:02.405 Volatile Write Cache: Present 00:07:02.405 Atomic Write Unit (Normal): 1 00:07:02.405 Atomic Write Unit (PFail): 1 00:07:02.405 Atomic Compare & Write Unit: 1 00:07:02.405 Fused Compare & Write: Not Supported 00:07:02.405 Scatter-Gather List 00:07:02.405 SGL Command Set: Supported 00:07:02.405 SGL Keyed: Not Supported 00:07:02.405 SGL Bit Bucket Descriptor: Not Supported 00:07:02.405 SGL Metadata Pointer: Not Supported 00:07:02.405 Oversized SGL: Not Supported 00:07:02.405 SGL Metadata Address: Not Supported 00:07:02.405 SGL Offset: Not Supported 00:07:02.405 Transport SGL Data Block: Not Supported 00:07:02.405 Replay Protected Memory Block: Not Supported 00:07:02.405 00:07:02.405 Firmware Slot Information 00:07:02.405 ========================= 00:07:02.405 Active slot: 1 00:07:02.405 Slot 1 Firmware Revision: 1.0 00:07:02.405 00:07:02.405 00:07:02.405 Commands Supported and Effects 00:07:02.405 ============================== 00:07:02.405 Admin Commands 00:07:02.405 -------------- 00:07:02.405 Delete I/O Submission Queue (00h): Supported 00:07:02.405 Create I/O Submission Queue (01h): Supported 00:07:02.405 Get Log Page (02h): Supported 00:07:02.405 Delete I/O Completion Queue (04h): Supported 00:07:02.405 Create I/O Completion Queue (05h): Supported 00:07:02.405 Identify (06h): Supported 00:07:02.405 Abort (08h): Supported 00:07:02.405 Set Features (09h): Supported 00:07:02.405 Get Features (0Ah): Supported 00:07:02.405 Asynchronous Event Request (0Ch): Supported 00:07:02.405 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:02.405 Directive Send (19h): Supported 00:07:02.405 Directive Receive (1Ah): Supported 00:07:02.405 Virtualization Management (1Ch): Supported 00:07:02.405 Doorbell Buffer Config (7Ch): Supported 00:07:02.405 Format NVM (80h): Supported LBA-Change 00:07:02.405 I/O Commands 00:07:02.405 ------------ 00:07:02.405 Flush (00h): Supported LBA-Change 00:07:02.405 Write (01h): Supported LBA-Change 00:07:02.405 Read (02h): Supported 00:07:02.405 Compare (05h): Supported 00:07:02.405 Write Zeroes (08h): Supported LBA-Change 00:07:02.405 Dataset Management (09h): Supported LBA-Change 00:07:02.405 Unknown (0Ch): Supported 00:07:02.405 Unknown (12h): Supported 00:07:02.405 Copy (19h): Supported LBA-Change 
00:07:02.405 Unknown (1Dh): Supported LBA-Change 00:07:02.405 00:07:02.405 Error Log 00:07:02.405 ========= 00:07:02.405 00:07:02.405 Arbitration 00:07:02.405 =========== 00:07:02.405 Arbitration Burst: no limit 00:07:02.405 00:07:02.405 Power Management 00:07:02.405 ================ 00:07:02.405 Number of Power States: 1 00:07:02.405 Current Power State: Power State #0 00:07:02.405 Power State #0: 00:07:02.405 Max Power: 25.00 W 00:07:02.405 Non-Operational State: Operational 00:07:02.405 Entry Latency: 16 microseconds 00:07:02.405 Exit Latency: 4 microseconds 00:07:02.405 Relative Read Throughput: 0 00:07:02.405 Relative Read Latency: 0 00:07:02.405 Relative Write Throughput: 0 00:07:02.405 Relative Write Latency: 0 00:07:02.405 Idle Power: Not Reported 00:07:02.405 Active Power: Not Reported 00:07:02.405 Non-Operational Permissive Mode: Not Supported 00:07:02.405 00:07:02.405 Health Information 00:07:02.405 ================== 00:07:02.405 Critical Warnings: 00:07:02.405 Available Spare Space: OK 00:07:02.405 Temperature: OK 00:07:02.405 Device Reliability: OK 00:07:02.405 Read Only: No 00:07:02.405 Volatile Memory Backup: OK 00:07:02.405 Current Temperature: 323 Kelvin (50 Celsius) 00:07:02.405 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:02.405 Available Spare: 0% 00:07:02.405 Available Spare Threshold: 0% 00:07:02.405 Life Percentage Used: 0% 00:07:02.405 Data Units Read: 859 00:07:02.405 Data Units Written: 788 00:07:02.405 Host Read Commands: 41888 00:07:02.405 Host Write Commands: 41311 00:07:02.405 Controller Busy Time: 0 minutes 00:07:02.405 Power Cycles: 0 00:07:02.405 Power On Hours: 0 hours 00:07:02.405 Unsafe Shutdowns: 0 00:07:02.405 Unrecoverable Media Errors: 0 00:07:02.405 Lifetime Error Log Entries: 0 00:07:02.405 Warning Temperature Time: 0 minutes 00:07:02.405 Critical Temperature Time: 0 minutes 00:07:02.405 00:07:02.405 Number of Queues 00:07:02.405 ================ 00:07:02.405 Number of I/O Submission Queues: 64 00:07:02.405 Number of I/O Completion Queues: 64 00:07:02.405 00:07:02.405 ZNS Specific Controller Data 00:07:02.405 ============================ 00:07:02.405 Zone Append Size Limit: 0 00:07:02.405 00:07:02.405 00:07:02.405 Active Namespaces 00:07:02.405 ================= 00:07:02.405 Namespace ID:1 00:07:02.405 Error Recovery Timeout: Unlimited 00:07:02.405 Command Set Identifier: NVM (00h) 00:07:02.405 Deallocate: Supported 00:07:02.405 Deallocated/Unwritten Error: Supported 00:07:02.405 Deallocated Read Value: All 0x00 00:07:02.405 Deallocate in Write Zeroes: Not Supported 00:07:02.405 Deallocated Guard Field: 0xFFFF 00:07:02.405 Flush: Supported 00:07:02.405 Reservation: Not Supported 00:07:02.405 Namespace Sharing Capabilities: Multiple Controllers 00:07:02.405 Size (in LBAs): 262144 (1GiB) 00:07:02.405 Capacity (in LBAs): 262144 (1GiB) 00:07:02.405 Utilization (in LBAs): 262144 (1GiB) 00:07:02.405 Thin Provisioning: Not Supported 00:07:02.405 Per-NS Atomic Units: No 00:07:02.405 Maximum Single Source Range Length: 128 00:07:02.405 Maximum Copy Length: 128 00:07:02.405 Maximum Source Range Count: 128 00:07:02.405 NGUID/EUI64 Never Reused: No 00:07:02.405 Namespace Write Protected: No 00:07:02.405 Endurance group ID: 1 00:07:02.405 Number of LBA Formats: 8 00:07:02.405 Current LBA Format: LBA Format #04 00:07:02.405 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:02.405 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:02.405 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:02.405 LBA Format #03: Data Size: 512 Metadata Size: 64 
00:07:02.405 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:02.405 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:02.405 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:02.405 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:02.405 00:07:02.405 Get Feature FDP: 00:07:02.405 ================ 00:07:02.405 Enabled: Yes 00:07:02.405 FDP configuration index: 0 00:07:02.405 00:07:02.405 FDP configurations log page 00:07:02.405 =========================== 00:07:02.405 Number of FDP configurations: 1 00:07:02.406 Version: 0 00:07:02.406 Size: 112 00:07:02.406 FDP Configuration Descriptor: 0 00:07:02.406 Descriptor Size: 96 00:07:02.406 Reclaim Group Identifier format: 2 00:07:02.406 FDP Volatile Write Cache: Not Present 00:07:02.406 FDP Configuration: Valid 00:07:02.406 Vendor Specific Size: 0 00:07:02.406 Number of Reclaim Groups: 2 00:07:02.406 Number of Reclaim Unit Handles: 8 00:07:02.406 Max Placement Identifiers: 128 00:07:02.406 Number of Namespaces Supported: 256 00:07:02.406 Reclaim Unit Nominal Size: 6000000 bytes 00:07:02.406 Estimated Reclaim Unit Time Limit: Not Reported 00:07:02.406 RUH Desc #000: RUH Type: Initially Isolated 00:07:02.406 RUH Desc #001: RUH Type: Initially Isolated 00:07:02.406 RUH Desc #002: RUH Type: Initially Isolated 00:07:02.406 RUH Desc #003: RUH Type: Initially Isolated 00:07:02.406 RUH Desc #004: RUH Type: Initially Isolated 00:07:02.406 RUH Desc #005: RUH Type: Initially Isolated 00:07:02.406 RUH Desc #006: RUH Type: Initially Isolated 00:07:02.406 RUH Desc #007: RUH Type: Initially Isolated 00:07:02.406 00:07:02.406 FDP reclaim unit handle usage log page 00:07:02.406 ====================================== 00:07:02.406 Number of Reclaim Unit Handles: 8 00:07:02.406 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:02.406 RUH Usage Desc #001: RUH Attributes: Unused 00:07:02.406 RUH Usage Desc #002: RUH Attributes: Unused 00:07:02.406 RUH Usage Desc #003: RUH Attributes: Unused 00:07:02.406 RUH Usage Desc #004: RUH Attributes: Unused 00:07:02.406 RUH Usage Desc #005: RUH Attributes: Unused 00:07:02.406 RUH Usage Desc #006: RUH Attributes: Unused 00:07:02.406 RUH Usage Desc #007: RUH Attributes: Unused 00:07:02.406 00:07:02.406 FDP statistics log page 00:07:02.406 ======================= 00:07:02.406 Host bytes with metadata written: 504668160 00:07:02.406 Media bytes with metadata written: 504725504 00:07:02.406 Media bytes erased: 0 00:07:02.406 00:07:02.406 FDP events log page 00:07:02.406 =================== 00:07:02.406 Number of FDP events: 0 00:07:02.406 00:07:02.406 NVM Specific Namespace Data 00:07:02.406 =========================== 00:07:02.406 Logical Block Storage Tag Mask: 0 00:07:02.406 Protection Information Capabilities: 00:07:02.406 16b Guard Protection Information Storage Tag Support: No 00:07:02.406 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:02.406 Storage Tag Check Read Support: No 00:07:02.406 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.406 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.406 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.406 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.406 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.406 Extended LBA Format #05: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:07:02.406 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.406 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:02.406 00:07:02.406 real 0m1.136s 00:07:02.406 user 0m0.383s 00:07:02.406 sys 0m0.542s 00:07:02.406 10:38:22 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.406 10:38:22 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:02.406 ************************************ 00:07:02.406 END TEST nvme_identify 00:07:02.406 ************************************ 00:07:02.406 10:38:22 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:02.406 10:38:22 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.406 10:38:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.406 10:38:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.406 ************************************ 00:07:02.406 START TEST nvme_perf 00:07:02.406 ************************************ 00:07:02.406 10:38:22 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:07:02.406 10:38:22 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:03.785 Initializing NVMe Controllers 00:07:03.785 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:03.785 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:03.785 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:03.785 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:03.785 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:03.785 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:03.785 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:03.785 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:03.785 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:03.785 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:03.785 Initialization complete. Launching workers. 
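A minimal cross-check sketch, not part of the harness: the loop below re-runs the identify pass recorded above, and the arithmetic lines verify figures reported in the dumps and in the perf summary table that follows. The bdf list and the binary path are assumptions read off the xtrace lines in this log.

# Sketch under the assumptions above; bdfs mirrors the four QEMU controllers seen in this run.
identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
for bdf in "${bdfs[@]}"; do
    "$identify" -r "trtype:PCIe traddr:$bdf" -i 0
done

# Capacity: "Size (in LBAs): 1310720 (5GiB)" under the current LBA format #04
# (4096-byte data blocks) -> 1310720 * 4096 bytes is exactly 5 GiB.
echo $(( 1310720 * 4096 / 1024**3 ))    # prints 5

# Temperature: the identify output reports integer Kelvin with a 273 offset,
# so "323 Kelvin (50 Celsius)" checks out.
echo $(( 323 - 273 ))                   # prints 50

# Throughput in the summary below: MiB/s = IOPS * I/O size / 2^20. With
# -o 12288 (12 KiB reads), 14495.73 IOPS gives the reported 169.87 MiB/s.
awk 'BEGIN { printf "%.2f\n", 14495.73 * 12288 / 1048576 }'    # prints 169.87

The -LL in the spdk_nvme_perf command line above appears to be what requests the detailed per-range latency histograms printed after the percentile summaries; a single -L would stop at the summary.
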
00:07:03.785 ======================================================== 00:07:03.785 Latency(us) 00:07:03.785 Device Information : IOPS MiB/s Average min max 00:07:03.785 PCIE (0000:00:13.0) NSID 1 from core 0: 14495.73 169.87 8829.30 5647.02 22884.03 00:07:03.785 PCIE (0000:00:10.0) NSID 1 from core 0: 14495.73 169.87 8819.34 5186.29 21310.60 00:07:03.785 PCIE (0000:00:11.0) NSID 1 from core 0: 14495.73 169.87 8813.27 5092.77 20681.82 00:07:03.785 PCIE (0000:00:12.0) NSID 1 from core 0: 14495.73 169.87 8806.09 4314.52 20418.03 00:07:03.785 PCIE (0000:00:12.0) NSID 2 from core 0: 14495.73 169.87 8799.08 4107.48 20394.01 00:07:03.785 PCIE (0000:00:12.0) NSID 3 from core 0: 14495.73 169.87 8792.18 3955.69 20355.84 00:07:03.785 ======================================================== 00:07:03.785 Total : 86974.39 1019.23 8809.88 3955.69 22884.03 00:07:03.785 00:07:03.785 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:03.785 ================================================================================= 00:07:03.785 1.00000% : 6755.249us 00:07:03.785 10.00000% : 7410.609us 00:07:03.785 25.00000% : 7914.732us 00:07:03.785 50.00000% : 8519.680us 00:07:03.785 75.00000% : 9175.040us 00:07:03.785 90.00000% : 10485.760us 00:07:03.785 95.00000% : 12048.542us 00:07:03.785 98.00000% : 14014.622us 00:07:03.785 99.00000% : 16434.412us 00:07:03.785 99.50000% : 17745.132us 00:07:03.785 99.90000% : 22483.889us 00:07:03.785 99.99000% : 22887.188us 00:07:03.785 99.99900% : 22887.188us 00:07:03.785 99.99990% : 22887.188us 00:07:03.785 99.99999% : 22887.188us 00:07:03.785 00:07:03.785 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:03.785 ================================================================================= 00:07:03.785 1.00000% : 6654.425us 00:07:03.785 10.00000% : 7360.197us 00:07:03.785 25.00000% : 7864.320us 00:07:03.785 50.00000% : 8519.680us 00:07:03.785 75.00000% : 9175.040us 00:07:03.785 90.00000% : 10536.172us 00:07:03.785 95.00000% : 11947.717us 00:07:03.785 98.00000% : 13913.797us 00:07:03.785 99.00000% : 16736.886us 00:07:03.785 99.50000% : 17442.658us 00:07:03.785 99.90000% : 21072.345us 00:07:03.785 99.99000% : 21374.818us 00:07:03.785 99.99900% : 21374.818us 00:07:03.785 99.99990% : 21374.818us 00:07:03.785 99.99999% : 21374.818us 00:07:03.785 00:07:03.785 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:03.785 ================================================================================= 00:07:03.785 1.00000% : 6755.249us 00:07:03.785 10.00000% : 7360.197us 00:07:03.785 25.00000% : 7864.320us 00:07:03.785 50.00000% : 8519.680us 00:07:03.785 75.00000% : 9175.040us 00:07:03.785 90.00000% : 10536.172us 00:07:03.785 95.00000% : 12048.542us 00:07:03.785 98.00000% : 13712.148us 00:07:03.785 99.00000% : 16232.763us 00:07:03.785 99.50000% : 17341.834us 00:07:03.785 99.90000% : 20467.397us 00:07:03.785 99.99000% : 20669.046us 00:07:03.785 99.99900% : 20769.871us 00:07:03.785 99.99990% : 20769.871us 00:07:03.785 99.99999% : 20769.871us 00:07:03.785 00:07:03.785 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:03.785 ================================================================================= 00:07:03.785 1.00000% : 6704.837us 00:07:03.785 10.00000% : 7309.785us 00:07:03.785 25.00000% : 7864.320us 00:07:03.785 50.00000% : 8519.680us 00:07:03.785 75.00000% : 9175.040us 00:07:03.785 90.00000% : 10485.760us 00:07:03.785 95.00000% : 12300.603us 00:07:03.785 98.00000% : 13913.797us 00:07:03.785 
99.00000% : 16131.938us 00:07:03.785 99.50000% : 18047.606us 00:07:03.785 99.90000% : 20265.748us 00:07:03.785 99.99000% : 20467.397us 00:07:03.785 99.99900% : 20467.397us 00:07:03.785 99.99990% : 20467.397us 00:07:03.785 99.99999% : 20467.397us 00:07:03.785 00:07:03.785 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:03.785 ================================================================================= 00:07:03.785 1.00000% : 6704.837us 00:07:03.785 10.00000% : 7309.785us 00:07:03.785 25.00000% : 7864.320us 00:07:03.785 50.00000% : 8519.680us 00:07:03.785 75.00000% : 9124.628us 00:07:03.785 90.00000% : 10586.585us 00:07:03.785 95.00000% : 12300.603us 00:07:03.785 98.00000% : 14216.271us 00:07:03.785 99.00000% : 15526.991us 00:07:03.785 99.50000% : 17946.782us 00:07:03.785 99.90000% : 20164.923us 00:07:03.785 99.99000% : 20467.397us 00:07:03.785 99.99900% : 20467.397us 00:07:03.785 99.99990% : 20467.397us 00:07:03.785 99.99999% : 20467.397us 00:07:03.785 00:07:03.785 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:03.785 ================================================================================= 00:07:03.785 1.00000% : 6654.425us 00:07:03.785 10.00000% : 7360.197us 00:07:03.785 25.00000% : 7914.732us 00:07:03.785 50.00000% : 8519.680us 00:07:03.785 75.00000% : 9124.628us 00:07:03.785 90.00000% : 10536.172us 00:07:03.785 95.00000% : 12199.778us 00:07:03.785 98.00000% : 14518.745us 00:07:03.785 99.00000% : 15627.815us 00:07:03.785 99.50000% : 17845.957us 00:07:03.785 99.90000% : 20164.923us 00:07:03.785 99.99000% : 20366.572us 00:07:03.785 99.99900% : 20366.572us 00:07:03.785 99.99990% : 20366.572us 00:07:03.785 99.99999% : 20366.572us 00:07:03.785 00:07:03.785 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:03.785 ============================================================================== 00:07:03.785 Range in us Cumulative IO count 00:07:03.785 5646.178 - 5671.385: 0.0206% ( 3) 00:07:03.785 5671.385 - 5696.591: 0.0344% ( 2) 00:07:03.785 5696.591 - 5721.797: 0.0413% ( 1) 00:07:03.785 5721.797 - 5747.003: 0.0551% ( 2) 00:07:03.785 5747.003 - 5772.209: 0.0688% ( 2) 00:07:03.785 5772.209 - 5797.415: 0.0826% ( 2) 00:07:03.785 5797.415 - 5822.622: 0.1032% ( 3) 00:07:03.785 5822.622 - 5847.828: 0.1239% ( 3) 00:07:03.785 5847.828 - 5873.034: 0.1377% ( 2) 00:07:03.786 5873.034 - 5898.240: 0.1514% ( 2) 00:07:03.786 5898.240 - 5923.446: 0.1652% ( 2) 00:07:03.786 5923.446 - 5948.652: 0.1790% ( 2) 00:07:03.786 5948.652 - 5973.858: 0.1927% ( 2) 00:07:03.786 5973.858 - 5999.065: 0.2134% ( 3) 00:07:03.786 5999.065 - 6024.271: 0.2203% ( 1) 00:07:03.786 6024.271 - 6049.477: 0.2340% ( 2) 00:07:03.786 6049.477 - 6074.683: 0.2547% ( 3) 00:07:03.786 6074.683 - 6099.889: 0.2684% ( 2) 00:07:03.786 6099.889 - 6125.095: 0.2822% ( 2) 00:07:03.786 6125.095 - 6150.302: 0.2960% ( 2) 00:07:03.786 6150.302 - 6175.508: 0.3097% ( 2) 00:07:03.786 6175.508 - 6200.714: 0.3304% ( 3) 00:07:03.786 6200.714 - 6225.920: 0.3442% ( 2) 00:07:03.786 6225.920 - 6251.126: 0.3579% ( 2) 00:07:03.786 6251.126 - 6276.332: 0.3717% ( 2) 00:07:03.786 6276.332 - 6301.538: 0.3923% ( 3) 00:07:03.786 6301.538 - 6326.745: 0.4061% ( 2) 00:07:03.786 6326.745 - 6351.951: 0.4199% ( 2) 00:07:03.786 6351.951 - 6377.157: 0.4336% ( 2) 00:07:03.786 6377.157 - 6402.363: 0.4405% ( 1) 00:07:03.786 6503.188 - 6553.600: 0.4956% ( 8) 00:07:03.786 6553.600 - 6604.012: 0.5369% ( 6) 00:07:03.786 6604.012 - 6654.425: 0.7778% ( 35) 00:07:03.786 6654.425 - 6704.837: 0.9499% ( 25) 00:07:03.786 
6704.837 - 6755.249: 1.2459% ( 43) 00:07:03.786 6755.249 - 6805.662: 1.6795% ( 63) 00:07:03.786 6805.662 - 6856.074: 2.1063% ( 62) 00:07:03.786 6856.074 - 6906.486: 2.6501% ( 79) 00:07:03.786 6906.486 - 6956.898: 3.2558% ( 88) 00:07:03.786 6956.898 - 7007.311: 3.8340% ( 84) 00:07:03.786 7007.311 - 7057.723: 4.4741% ( 93) 00:07:03.786 7057.723 - 7108.135: 5.2106% ( 107) 00:07:03.786 7108.135 - 7158.548: 6.0848% ( 127) 00:07:03.786 7158.548 - 7208.960: 6.9246% ( 122) 00:07:03.786 7208.960 - 7259.372: 7.7987% ( 127) 00:07:03.786 7259.372 - 7309.785: 8.8312% ( 150) 00:07:03.786 7309.785 - 7360.197: 9.9601% ( 164) 00:07:03.786 7360.197 - 7410.609: 11.1853% ( 178) 00:07:03.786 7410.609 - 7461.022: 12.3899% ( 175) 00:07:03.786 7461.022 - 7511.434: 13.7941% ( 204) 00:07:03.786 7511.434 - 7561.846: 15.0606% ( 184) 00:07:03.786 7561.846 - 7612.258: 16.4854% ( 207) 00:07:03.786 7612.258 - 7662.671: 17.9034% ( 206) 00:07:03.786 7662.671 - 7713.083: 19.2938% ( 202) 00:07:03.786 7713.083 - 7763.495: 20.7048% ( 205) 00:07:03.786 7763.495 - 7813.908: 22.2192% ( 220) 00:07:03.786 7813.908 - 7864.320: 23.7817% ( 227) 00:07:03.786 7864.320 - 7914.732: 25.3166% ( 223) 00:07:03.786 7914.732 - 7965.145: 27.1820% ( 271) 00:07:03.786 7965.145 - 8015.557: 29.1781% ( 290) 00:07:03.786 8015.557 - 8065.969: 31.2156% ( 296) 00:07:03.786 8065.969 - 8116.382: 33.1842% ( 286) 00:07:03.786 8116.382 - 8166.794: 35.3111% ( 309) 00:07:03.786 8166.794 - 8217.206: 37.5482% ( 325) 00:07:03.786 8217.206 - 8267.618: 39.6407% ( 304) 00:07:03.786 8267.618 - 8318.031: 41.7952% ( 313) 00:07:03.786 8318.031 - 8368.443: 44.0873% ( 333) 00:07:03.786 8368.443 - 8418.855: 46.4689% ( 346) 00:07:03.786 8418.855 - 8469.268: 48.7954% ( 338) 00:07:03.786 8469.268 - 8519.680: 51.1357% ( 340) 00:07:03.786 8519.680 - 8570.092: 53.4416% ( 335) 00:07:03.786 8570.092 - 8620.505: 55.8852% ( 355) 00:07:03.786 8620.505 - 8670.917: 58.1635% ( 331) 00:07:03.786 8670.917 - 8721.329: 60.4075% ( 326) 00:07:03.786 8721.329 - 8771.742: 62.6170% ( 321) 00:07:03.786 8771.742 - 8822.154: 64.7233% ( 306) 00:07:03.786 8822.154 - 8872.566: 66.6919% ( 286) 00:07:03.786 8872.566 - 8922.978: 68.5297% ( 267) 00:07:03.786 8922.978 - 8973.391: 70.2437% ( 249) 00:07:03.786 8973.391 - 9023.803: 71.8681% ( 236) 00:07:03.786 9023.803 - 9074.215: 73.3274% ( 212) 00:07:03.786 9074.215 - 9124.628: 74.6627% ( 194) 00:07:03.786 9124.628 - 9175.040: 75.9499% ( 187) 00:07:03.786 9175.040 - 9225.452: 77.1820% ( 179) 00:07:03.786 9225.452 - 9275.865: 78.2902% ( 161) 00:07:03.786 9275.865 - 9326.277: 79.3296% ( 151) 00:07:03.786 9326.277 - 9376.689: 80.3345% ( 146) 00:07:03.786 9376.689 - 9427.102: 81.1399% ( 117) 00:07:03.786 9427.102 - 9477.514: 82.0209% ( 128) 00:07:03.786 9477.514 - 9527.926: 82.7506% ( 106) 00:07:03.786 9527.926 - 9578.338: 83.4871% ( 107) 00:07:03.786 9578.338 - 9628.751: 84.1547% ( 97) 00:07:03.786 9628.751 - 9679.163: 84.7605% ( 88) 00:07:03.786 9679.163 - 9729.575: 85.3111% ( 80) 00:07:03.786 9729.575 - 9779.988: 85.8687% ( 81) 00:07:03.786 9779.988 - 9830.400: 86.3780% ( 74) 00:07:03.786 9830.400 - 9880.812: 86.8599% ( 70) 00:07:03.786 9880.812 - 9931.225: 87.2591% ( 58) 00:07:03.786 9931.225 - 9981.637: 87.6032% ( 50) 00:07:03.786 9981.637 - 10032.049: 87.9130% ( 45) 00:07:03.786 10032.049 - 10082.462: 88.2572% ( 50) 00:07:03.786 10082.462 - 10132.874: 88.6082% ( 51) 00:07:03.786 10132.874 - 10183.286: 88.8767% ( 39) 00:07:03.786 10183.286 - 10233.698: 89.1107% ( 34) 00:07:03.786 10233.698 - 10284.111: 89.3172% ( 30) 00:07:03.786 10284.111 - 10334.523: 
89.5306% ( 31) 00:07:03.786 10334.523 - 10384.935: 89.7371% ( 30) 00:07:03.786 10384.935 - 10435.348: 89.8816% ( 21) 00:07:03.786 10435.348 - 10485.760: 90.0537% ( 25) 00:07:03.786 10485.760 - 10536.172: 90.1776% ( 18) 00:07:03.786 10536.172 - 10586.585: 90.3153% ( 20) 00:07:03.786 10586.585 - 10636.997: 90.4873% ( 25) 00:07:03.786 10636.997 - 10687.409: 90.6388% ( 22) 00:07:03.786 10687.409 - 10737.822: 90.7902% ( 22) 00:07:03.786 10737.822 - 10788.234: 90.9141% ( 18) 00:07:03.786 10788.234 - 10838.646: 91.0380% ( 18) 00:07:03.786 10838.646 - 10889.058: 91.2032% ( 24) 00:07:03.786 10889.058 - 10939.471: 91.3546% ( 22) 00:07:03.786 10939.471 - 10989.883: 91.5061% ( 22) 00:07:03.786 10989.883 - 11040.295: 91.6781% ( 25) 00:07:03.786 11040.295 - 11090.708: 91.8778% ( 29) 00:07:03.786 11090.708 - 11141.120: 92.0774% ( 29) 00:07:03.786 11141.120 - 11191.532: 92.2701% ( 28) 00:07:03.786 11191.532 - 11241.945: 92.4697% ( 29) 00:07:03.786 11241.945 - 11292.357: 92.6900% ( 32) 00:07:03.786 11292.357 - 11342.769: 92.8965% ( 30) 00:07:03.786 11342.769 - 11393.182: 93.1236% ( 33) 00:07:03.786 11393.182 - 11443.594: 93.3026% ( 26) 00:07:03.786 11443.594 - 11494.006: 93.5160% ( 31) 00:07:03.786 11494.006 - 11544.418: 93.6812% ( 24) 00:07:03.786 11544.418 - 11594.831: 93.8188% ( 20) 00:07:03.786 11594.831 - 11645.243: 93.9771% ( 23) 00:07:03.786 11645.243 - 11695.655: 94.1423% ( 24) 00:07:03.786 11695.655 - 11746.068: 94.2662% ( 18) 00:07:03.786 11746.068 - 11796.480: 94.4108% ( 21) 00:07:03.786 11796.480 - 11846.892: 94.5209% ( 16) 00:07:03.786 11846.892 - 11897.305: 94.6517% ( 19) 00:07:03.786 11897.305 - 11947.717: 94.8031% ( 22) 00:07:03.786 11947.717 - 11998.129: 94.9339% ( 19) 00:07:03.786 11998.129 - 12048.542: 95.0509% ( 17) 00:07:03.786 12048.542 - 12098.954: 95.1473% ( 14) 00:07:03.786 12098.954 - 12149.366: 95.2781% ( 19) 00:07:03.786 12149.366 - 12199.778: 95.3676% ( 13) 00:07:03.786 12199.778 - 12250.191: 95.4639% ( 14) 00:07:03.786 12250.191 - 12300.603: 95.5603% ( 14) 00:07:03.786 12300.603 - 12351.015: 95.6567% ( 14) 00:07:03.786 12351.015 - 12401.428: 95.7599% ( 15) 00:07:03.786 12401.428 - 12451.840: 95.8907% ( 19) 00:07:03.786 12451.840 - 12502.252: 96.0008% ( 16) 00:07:03.786 12502.252 - 12552.665: 96.0972% ( 14) 00:07:03.786 12552.665 - 12603.077: 96.2211% ( 18) 00:07:03.786 12603.077 - 12653.489: 96.3381% ( 17) 00:07:03.786 12653.489 - 12703.902: 96.4620% ( 18) 00:07:03.786 12703.902 - 12754.314: 96.5653% ( 15) 00:07:03.786 12754.314 - 12804.726: 96.6754% ( 16) 00:07:03.786 12804.726 - 12855.138: 96.7649% ( 13) 00:07:03.786 12855.138 - 12905.551: 96.8681% ( 15) 00:07:03.786 12905.551 - 13006.375: 97.0058% ( 20) 00:07:03.786 13006.375 - 13107.200: 97.1503% ( 21) 00:07:03.786 13107.200 - 13208.025: 97.2673% ( 17) 00:07:03.786 13208.025 - 13308.849: 97.3499% ( 12) 00:07:03.786 13308.849 - 13409.674: 97.4463% ( 14) 00:07:03.786 13409.674 - 13510.498: 97.5564% ( 16) 00:07:03.786 13510.498 - 13611.323: 97.6666% ( 16) 00:07:03.786 13611.323 - 13712.148: 97.7836% ( 17) 00:07:03.786 13712.148 - 13812.972: 97.8868% ( 15) 00:07:03.786 13812.972 - 13913.797: 97.9557% ( 10) 00:07:03.786 13913.797 - 14014.622: 98.0107% ( 8) 00:07:03.786 14014.622 - 14115.446: 98.0452% ( 5) 00:07:03.786 14115.446 - 14216.271: 98.0727% ( 4) 00:07:03.786 14216.271 - 14317.095: 98.1002% ( 4) 00:07:03.786 14317.095 - 14417.920: 98.1346% ( 5) 00:07:03.786 14417.920 - 14518.745: 98.1553% ( 3) 00:07:03.786 14518.745 - 14619.569: 98.1897% ( 5) 00:07:03.786 14619.569 - 14720.394: 98.2172% ( 4) 00:07:03.786 14720.394 - 
14821.218: 98.2379% ( 3) 00:07:03.786 14922.043 - 15022.868: 98.2448% ( 1) 00:07:03.786 15022.868 - 15123.692: 98.2723% ( 4) 00:07:03.786 15123.692 - 15224.517: 98.3067% ( 5) 00:07:03.786 15224.517 - 15325.342: 98.3411% ( 5) 00:07:03.786 15325.342 - 15426.166: 98.3756% ( 5) 00:07:03.786 15426.166 - 15526.991: 98.4100% ( 5) 00:07:03.786 15526.991 - 15627.815: 98.4650% ( 8) 00:07:03.786 15627.815 - 15728.640: 98.5407% ( 11) 00:07:03.786 15728.640 - 15829.465: 98.6165% ( 11) 00:07:03.786 15829.465 - 15930.289: 98.6922% ( 11) 00:07:03.786 15930.289 - 16031.114: 98.7748% ( 12) 00:07:03.786 16031.114 - 16131.938: 98.8436% ( 10) 00:07:03.786 16131.938 - 16232.763: 98.9124% ( 10) 00:07:03.786 16232.763 - 16333.588: 98.9813% ( 10) 00:07:03.786 16333.588 - 16434.412: 99.0501% ( 10) 00:07:03.786 16434.412 - 16535.237: 99.0914% ( 6) 00:07:03.786 16535.237 - 16636.062: 99.1534% ( 9) 00:07:03.786 16636.062 - 16736.886: 99.2015% ( 7) 00:07:03.786 16736.886 - 16837.711: 99.2635% ( 9) 00:07:03.786 16837.711 - 16938.535: 99.3048% ( 6) 00:07:03.786 16938.535 - 17039.360: 99.3323% ( 4) 00:07:03.786 17039.360 - 17140.185: 99.3599% ( 4) 00:07:03.786 17140.185 - 17241.009: 99.3874% ( 4) 00:07:03.786 17241.009 - 17341.834: 99.4080% ( 3) 00:07:03.786 17341.834 - 17442.658: 99.4356% ( 4) 00:07:03.786 17442.658 - 17543.483: 99.4700% ( 5) 00:07:03.786 17543.483 - 17644.308: 99.4975% ( 4) 00:07:03.786 17644.308 - 17745.132: 99.5319% ( 5) 00:07:03.786 17745.132 - 17845.957: 99.5595% ( 4) 00:07:03.786 21072.345 - 21173.169: 99.5801% ( 3) 00:07:03.786 21173.169 - 21273.994: 99.6077% ( 4) 00:07:03.786 21273.994 - 21374.818: 99.6283% ( 3) 00:07:03.787 21374.818 - 21475.643: 99.6558% ( 4) 00:07:03.787 21475.643 - 21576.468: 99.6765% ( 3) 00:07:03.787 21576.468 - 21677.292: 99.7040% ( 4) 00:07:03.787 21677.292 - 21778.117: 99.7316% ( 4) 00:07:03.787 21778.117 - 21878.942: 99.7591% ( 4) 00:07:03.787 21878.942 - 21979.766: 99.7797% ( 3) 00:07:03.787 21979.766 - 22080.591: 99.8073% ( 4) 00:07:03.787 22080.591 - 22181.415: 99.8348% ( 4) 00:07:03.787 22181.415 - 22282.240: 99.8555% ( 3) 00:07:03.787 22282.240 - 22383.065: 99.8830% ( 4) 00:07:03.787 22383.065 - 22483.889: 99.9105% ( 4) 00:07:03.787 22483.889 - 22584.714: 99.9381% ( 4) 00:07:03.787 22584.714 - 22685.538: 99.9587% ( 3) 00:07:03.787 22685.538 - 22786.363: 99.9794% ( 3) 00:07:03.787 22786.363 - 22887.188: 100.0000% ( 3) 00:07:03.787 00:07:03.787 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:03.787 ============================================================================== 00:07:03.787 Range in us Cumulative IO count 00:07:03.787 5167.262 - 5192.468: 0.0069% ( 1) 00:07:03.787 5192.468 - 5217.674: 0.0206% ( 2) 00:07:03.787 5217.674 - 5242.880: 0.0344% ( 2) 00:07:03.787 5242.880 - 5268.086: 0.0482% ( 2) 00:07:03.787 5268.086 - 5293.292: 0.0619% ( 2) 00:07:03.787 5293.292 - 5318.498: 0.0757% ( 2) 00:07:03.787 5318.498 - 5343.705: 0.0826% ( 1) 00:07:03.787 5343.705 - 5368.911: 0.0964% ( 2) 00:07:03.787 5368.911 - 5394.117: 0.1101% ( 2) 00:07:03.787 5394.117 - 5419.323: 0.1239% ( 2) 00:07:03.787 5419.323 - 5444.529: 0.1377% ( 2) 00:07:03.787 5444.529 - 5469.735: 0.1445% ( 1) 00:07:03.787 5469.735 - 5494.942: 0.1514% ( 1) 00:07:03.787 5494.942 - 5520.148: 0.1721% ( 3) 00:07:03.787 5520.148 - 5545.354: 0.1858% ( 2) 00:07:03.787 5545.354 - 5570.560: 0.1996% ( 2) 00:07:03.787 5570.560 - 5595.766: 0.2134% ( 2) 00:07:03.787 5595.766 - 5620.972: 0.2271% ( 2) 00:07:03.787 5620.972 - 5646.178: 0.2340% ( 1) 00:07:03.787 5646.178 - 5671.385: 0.2547% ( 3) 
00:07:03.787 5696.591 - 5721.797: 0.2616% ( 1) 00:07:03.787 5721.797 - 5747.003: 0.2822% ( 3) 00:07:03.787 5747.003 - 5772.209: 0.2891% ( 1) 00:07:03.787 5772.209 - 5797.415: 0.3097% ( 3) 00:07:03.787 5797.415 - 5822.622: 0.3235% ( 2) 00:07:03.787 5822.622 - 5847.828: 0.3304% ( 1) 00:07:03.787 5847.828 - 5873.034: 0.3442% ( 2) 00:07:03.787 5873.034 - 5898.240: 0.3579% ( 2) 00:07:03.787 5898.240 - 5923.446: 0.3717% ( 2) 00:07:03.787 5923.446 - 5948.652: 0.3855% ( 2) 00:07:03.787 5948.652 - 5973.858: 0.3992% ( 2) 00:07:03.787 5973.858 - 5999.065: 0.4061% ( 1) 00:07:03.787 5999.065 - 6024.271: 0.4199% ( 2) 00:07:03.787 6024.271 - 6049.477: 0.4336% ( 2) 00:07:03.787 6049.477 - 6074.683: 0.4405% ( 1) 00:07:03.787 6402.363 - 6427.569: 0.4543% ( 2) 00:07:03.787 6427.569 - 6452.775: 0.4681% ( 2) 00:07:03.787 6452.775 - 6503.188: 0.5644% ( 14) 00:07:03.787 6503.188 - 6553.600: 0.6952% ( 19) 00:07:03.787 6553.600 - 6604.012: 0.8260% ( 19) 00:07:03.787 6604.012 - 6654.425: 1.0600% ( 34) 00:07:03.787 6654.425 - 6704.837: 1.3698% ( 45) 00:07:03.787 6704.837 - 6755.249: 1.7759% ( 59) 00:07:03.787 6755.249 - 6805.662: 2.1476% ( 54) 00:07:03.787 6805.662 - 6856.074: 2.6225% ( 69) 00:07:03.787 6856.074 - 6906.486: 3.1801% ( 81) 00:07:03.787 6906.486 - 6956.898: 3.7514% ( 83) 00:07:03.787 6956.898 - 7007.311: 4.4328% ( 99) 00:07:03.787 7007.311 - 7057.723: 5.1969% ( 111) 00:07:03.787 7057.723 - 7108.135: 6.0848% ( 129) 00:07:03.787 7108.135 - 7158.548: 6.8901% ( 117) 00:07:03.787 7158.548 - 7208.960: 7.8263% ( 136) 00:07:03.787 7208.960 - 7259.372: 8.8794% ( 153) 00:07:03.787 7259.372 - 7309.785: 9.9188% ( 151) 00:07:03.787 7309.785 - 7360.197: 11.0063% ( 158) 00:07:03.787 7360.197 - 7410.609: 12.1903% ( 172) 00:07:03.787 7410.609 - 7461.022: 13.4981% ( 190) 00:07:03.787 7461.022 - 7511.434: 14.7508% ( 182) 00:07:03.787 7511.434 - 7561.846: 16.1825% ( 208) 00:07:03.787 7561.846 - 7612.258: 17.4904% ( 190) 00:07:03.787 7612.258 - 7662.671: 19.0322% ( 224) 00:07:03.787 7662.671 - 7713.083: 20.4502% ( 206) 00:07:03.787 7713.083 - 7763.495: 22.0540% ( 233) 00:07:03.787 7763.495 - 7813.908: 23.6646% ( 234) 00:07:03.787 7813.908 - 7864.320: 25.2478% ( 230) 00:07:03.787 7864.320 - 7914.732: 26.8654% ( 235) 00:07:03.787 7914.732 - 7965.145: 28.4760% ( 234) 00:07:03.787 7965.145 - 8015.557: 30.3414% ( 271) 00:07:03.787 8015.557 - 8065.969: 32.2756% ( 281) 00:07:03.787 8065.969 - 8116.382: 34.1754% ( 276) 00:07:03.787 8116.382 - 8166.794: 36.0339% ( 270) 00:07:03.787 8166.794 - 8217.206: 38.0575% ( 294) 00:07:03.787 8217.206 - 8267.618: 40.1638% ( 306) 00:07:03.787 8267.618 - 8318.031: 42.1944% ( 295) 00:07:03.787 8318.031 - 8368.443: 44.3282% ( 310) 00:07:03.787 8368.443 - 8418.855: 46.3794% ( 298) 00:07:03.787 8418.855 - 8469.268: 48.6440% ( 329) 00:07:03.787 8469.268 - 8519.680: 50.8811% ( 325) 00:07:03.787 8519.680 - 8570.092: 52.9460% ( 300) 00:07:03.787 8570.092 - 8620.505: 55.0454% ( 305) 00:07:03.787 8620.505 - 8670.917: 57.1861% ( 311) 00:07:03.787 8670.917 - 8721.329: 59.2855% ( 305) 00:07:03.787 8721.329 - 8771.742: 61.2954% ( 292) 00:07:03.787 8771.742 - 8822.154: 63.3191% ( 294) 00:07:03.787 8822.154 - 8872.566: 65.3084% ( 289) 00:07:03.787 8872.566 - 8922.978: 67.1600% ( 269) 00:07:03.787 8922.978 - 8973.391: 68.9014% ( 253) 00:07:03.787 8973.391 - 9023.803: 70.5603% ( 241) 00:07:03.787 9023.803 - 9074.215: 72.1916% ( 237) 00:07:03.787 9074.215 - 9124.628: 73.6440% ( 211) 00:07:03.787 9124.628 - 9175.040: 75.0964% ( 211) 00:07:03.787 9175.040 - 9225.452: 76.3767% ( 186) 00:07:03.787 9225.452 - 9275.865: 
77.6294% ( 182) 00:07:03.787 9275.865 - 9326.277: 78.6481% ( 148) 00:07:03.787 9326.277 - 9376.689: 79.6256% ( 142) 00:07:03.787 9376.689 - 9427.102: 80.5823% ( 139) 00:07:03.787 9427.102 - 9477.514: 81.4840% ( 131) 00:07:03.787 9477.514 - 9527.926: 82.2343% ( 109) 00:07:03.787 9527.926 - 9578.338: 82.9433% ( 103) 00:07:03.787 9578.338 - 9628.751: 83.5972% ( 95) 00:07:03.787 9628.751 - 9679.163: 84.1410% ( 79) 00:07:03.787 9679.163 - 9729.575: 84.7673% ( 91) 00:07:03.787 9729.575 - 9779.988: 85.2629% ( 72) 00:07:03.787 9779.988 - 9830.400: 85.7035% ( 64) 00:07:03.787 9830.400 - 9880.812: 86.1646% ( 67) 00:07:03.787 9880.812 - 9931.225: 86.5708% ( 59) 00:07:03.787 9931.225 - 9981.637: 86.9975% ( 62) 00:07:03.787 9981.637 - 10032.049: 87.3761% ( 55) 00:07:03.787 10032.049 - 10082.462: 87.7340% ( 52) 00:07:03.787 10082.462 - 10132.874: 88.0507% ( 46) 00:07:03.787 10132.874 - 10183.286: 88.4224% ( 54) 00:07:03.787 10183.286 - 10233.698: 88.6770% ( 37) 00:07:03.787 10233.698 - 10284.111: 88.9730% ( 43) 00:07:03.787 10284.111 - 10334.523: 89.2483% ( 40) 00:07:03.787 10334.523 - 10384.935: 89.5030% ( 37) 00:07:03.787 10384.935 - 10435.348: 89.7164% ( 31) 00:07:03.787 10435.348 - 10485.760: 89.9642% ( 36) 00:07:03.787 10485.760 - 10536.172: 90.1432% ( 26) 00:07:03.787 10536.172 - 10586.585: 90.3841% ( 35) 00:07:03.787 10586.585 - 10636.997: 90.5631% ( 26) 00:07:03.787 10636.997 - 10687.409: 90.7420% ( 26) 00:07:03.787 10687.409 - 10737.822: 90.9279% ( 27) 00:07:03.787 10737.822 - 10788.234: 91.1412% ( 31) 00:07:03.787 10788.234 - 10838.646: 91.3477% ( 30) 00:07:03.787 10838.646 - 10889.058: 91.5405% ( 28) 00:07:03.787 10889.058 - 10939.471: 91.7194% ( 26) 00:07:03.787 10939.471 - 10989.883: 91.9053% ( 27) 00:07:03.787 10989.883 - 11040.295: 92.0636% ( 23) 00:07:03.787 11040.295 - 11090.708: 92.2701% ( 30) 00:07:03.787 11090.708 - 11141.120: 92.4628% ( 28) 00:07:03.787 11141.120 - 11191.532: 92.6556% ( 28) 00:07:03.787 11191.532 - 11241.945: 92.8689% ( 31) 00:07:03.787 11241.945 - 11292.357: 93.0479% ( 26) 00:07:03.787 11292.357 - 11342.769: 93.2406% ( 28) 00:07:03.787 11342.769 - 11393.182: 93.4265% ( 27) 00:07:03.787 11393.182 - 11443.594: 93.6055% ( 26) 00:07:03.787 11443.594 - 11494.006: 93.7638% ( 23) 00:07:03.787 11494.006 - 11544.418: 93.9221% ( 23) 00:07:03.787 11544.418 - 11594.831: 94.0735% ( 22) 00:07:03.787 11594.831 - 11645.243: 94.2181% ( 21) 00:07:03.787 11645.243 - 11695.655: 94.3764% ( 23) 00:07:03.787 11695.655 - 11746.068: 94.5347% ( 23) 00:07:03.787 11746.068 - 11796.480: 94.6724% ( 20) 00:07:03.787 11796.480 - 11846.892: 94.8100% ( 20) 00:07:03.787 11846.892 - 11897.305: 94.9477% ( 20) 00:07:03.787 11897.305 - 11947.717: 95.0028% ( 8) 00:07:03.787 11947.717 - 11998.129: 95.0854% ( 12) 00:07:03.787 11998.129 - 12048.542: 95.1611% ( 11) 00:07:03.787 12048.542 - 12098.954: 95.2368% ( 11) 00:07:03.787 12098.954 - 12149.366: 95.2919% ( 8) 00:07:03.787 12149.366 - 12199.778: 95.3744% ( 12) 00:07:03.787 12199.778 - 12250.191: 95.4777% ( 15) 00:07:03.787 12250.191 - 12300.603: 95.5121% ( 5) 00:07:03.787 12300.603 - 12351.015: 95.6222% ( 16) 00:07:03.787 12351.015 - 12401.428: 95.7530% ( 19) 00:07:03.787 12401.428 - 12451.840: 95.8494% ( 14) 00:07:03.787 12451.840 - 12502.252: 95.9664% ( 17) 00:07:03.787 12502.252 - 12552.665: 96.1110% ( 21) 00:07:03.787 12552.665 - 12603.077: 96.2073% ( 14) 00:07:03.787 12603.077 - 12653.489: 96.2968% ( 13) 00:07:03.787 12653.489 - 12703.902: 96.4138% ( 17) 00:07:03.787 12703.902 - 12754.314: 96.5102% ( 14) 00:07:03.787 12754.314 - 12804.726: 96.6272% 
( 17) 00:07:03.787 12804.726 - 12855.138: 96.6616% ( 5) 00:07:03.787 12855.138 - 12905.551: 96.7649% ( 15) 00:07:03.787 12905.551 - 13006.375: 96.9232% ( 23) 00:07:03.787 13006.375 - 13107.200: 97.0677% ( 21) 00:07:03.787 13107.200 - 13208.025: 97.2398% ( 25) 00:07:03.787 13208.025 - 13308.849: 97.3912% ( 22) 00:07:03.787 13308.849 - 13409.674: 97.5427% ( 22) 00:07:03.787 13409.674 - 13510.498: 97.7079% ( 24) 00:07:03.787 13510.498 - 13611.323: 97.8387% ( 19) 00:07:03.787 13611.323 - 13712.148: 97.9075% ( 10) 00:07:03.787 13712.148 - 13812.972: 97.9901% ( 12) 00:07:03.787 13812.972 - 13913.797: 98.0727% ( 12) 00:07:03.787 13913.797 - 14014.622: 98.1002% ( 4) 00:07:03.787 14014.622 - 14115.446: 98.1209% ( 3) 00:07:03.787 14115.446 - 14216.271: 98.1484% ( 4) 00:07:03.787 14216.271 - 14317.095: 98.1759% ( 4) 00:07:03.787 14317.095 - 14417.920: 98.2035% ( 4) 00:07:03.788 14417.920 - 14518.745: 98.2241% ( 3) 00:07:03.788 14518.745 - 14619.569: 98.2379% ( 2) 00:07:03.788 15325.342 - 15426.166: 98.2654% ( 4) 00:07:03.788 15426.166 - 15526.991: 98.2930% ( 4) 00:07:03.788 15526.991 - 15627.815: 98.3136% ( 3) 00:07:03.788 15627.815 - 15728.640: 98.3411% ( 4) 00:07:03.788 15728.640 - 15829.465: 98.3687% ( 4) 00:07:03.788 15829.465 - 15930.289: 98.4031% ( 5) 00:07:03.788 15930.289 - 16031.114: 98.4444% ( 6) 00:07:03.788 16031.114 - 16131.938: 98.5132% ( 10) 00:07:03.788 16131.938 - 16232.763: 98.5820% ( 10) 00:07:03.788 16232.763 - 16333.588: 98.6646% ( 12) 00:07:03.788 16333.588 - 16434.412: 98.7472% ( 12) 00:07:03.788 16434.412 - 16535.237: 98.8505% ( 15) 00:07:03.788 16535.237 - 16636.062: 98.9331% ( 12) 00:07:03.788 16636.062 - 16736.886: 99.0295% ( 14) 00:07:03.788 16736.886 - 16837.711: 99.1121% ( 12) 00:07:03.788 16837.711 - 16938.535: 99.2153% ( 15) 00:07:03.788 16938.535 - 17039.360: 99.3117% ( 14) 00:07:03.788 17039.360 - 17140.185: 99.3943% ( 12) 00:07:03.788 17140.185 - 17241.009: 99.4287% ( 5) 00:07:03.788 17241.009 - 17341.834: 99.4700% ( 6) 00:07:03.788 17341.834 - 17442.658: 99.5044% ( 5) 00:07:03.788 17442.658 - 17543.483: 99.5457% ( 6) 00:07:03.788 17543.483 - 17644.308: 99.5595% ( 2) 00:07:03.788 20164.923 - 20265.748: 99.5939% ( 5) 00:07:03.788 20265.748 - 20366.572: 99.6352% ( 6) 00:07:03.788 20366.572 - 20467.397: 99.6696% ( 5) 00:07:03.788 20467.397 - 20568.222: 99.7109% ( 6) 00:07:03.788 20568.222 - 20669.046: 99.7522% ( 6) 00:07:03.788 20669.046 - 20769.871: 99.7866% ( 5) 00:07:03.788 20769.871 - 20870.695: 99.8210% ( 5) 00:07:03.788 20870.695 - 20971.520: 99.8623% ( 6) 00:07:03.788 20971.520 - 21072.345: 99.9036% ( 6) 00:07:03.788 21072.345 - 21173.169: 99.9449% ( 6) 00:07:03.788 21173.169 - 21273.994: 99.9862% ( 6) 00:07:03.788 21273.994 - 21374.818: 100.0000% ( 2) 00:07:03.788 00:07:03.788 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:03.788 ============================================================================== 00:07:03.788 Range in us Cumulative IO count 00:07:03.788 5091.643 - 5116.849: 0.0551% ( 8) 00:07:03.788 5116.849 - 5142.055: 0.0826% ( 4) 00:07:03.788 5142.055 - 5167.262: 0.0964% ( 2) 00:07:03.788 5167.262 - 5192.468: 0.1101% ( 2) 00:07:03.788 5192.468 - 5217.674: 0.1170% ( 1) 00:07:03.788 5217.674 - 5242.880: 0.1239% ( 1) 00:07:03.788 5242.880 - 5268.086: 0.1377% ( 2) 00:07:03.788 5268.086 - 5293.292: 0.1514% ( 2) 00:07:03.788 5293.292 - 5318.498: 0.1652% ( 2) 00:07:03.788 5318.498 - 5343.705: 0.1858% ( 3) 00:07:03.788 5343.705 - 5368.911: 0.1996% ( 2) 00:07:03.788 5368.911 - 5394.117: 0.2134% ( 2) 00:07:03.788 5394.117 - 5419.323: 
0.2271% ( 2) 00:07:03.788 5419.323 - 5444.529: 0.2409% ( 2) 00:07:03.788 5444.529 - 5469.735: 0.2547% ( 2) 00:07:03.788 5469.735 - 5494.942: 0.2684% ( 2) 00:07:03.788 5494.942 - 5520.148: 0.2822% ( 2) 00:07:03.788 5520.148 - 5545.354: 0.2960% ( 2) 00:07:03.788 5545.354 - 5570.560: 0.3097% ( 2) 00:07:03.788 5570.560 - 5595.766: 0.3235% ( 2) 00:07:03.788 5595.766 - 5620.972: 0.3373% ( 2) 00:07:03.788 5620.972 - 5646.178: 0.3579% ( 3) 00:07:03.788 5646.178 - 5671.385: 0.3717% ( 2) 00:07:03.788 5671.385 - 5696.591: 0.3855% ( 2) 00:07:03.788 5696.591 - 5721.797: 0.3992% ( 2) 00:07:03.788 5721.797 - 5747.003: 0.4199% ( 3) 00:07:03.788 5747.003 - 5772.209: 0.4336% ( 2) 00:07:03.788 5772.209 - 5797.415: 0.4405% ( 1) 00:07:03.788 6503.188 - 6553.600: 0.4956% ( 8) 00:07:03.788 6553.600 - 6604.012: 0.7021% ( 30) 00:07:03.788 6604.012 - 6654.425: 0.8260% ( 18) 00:07:03.788 6654.425 - 6704.837: 0.9499% ( 18) 00:07:03.788 6704.837 - 6755.249: 1.2115% ( 38) 00:07:03.788 6755.249 - 6805.662: 1.6726% ( 67) 00:07:03.788 6805.662 - 6856.074: 2.1889% ( 75) 00:07:03.788 6856.074 - 6906.486: 2.6569% ( 68) 00:07:03.788 6906.486 - 6956.898: 3.1319% ( 69) 00:07:03.788 6956.898 - 7007.311: 3.7720% ( 93) 00:07:03.788 7007.311 - 7057.723: 4.4948% ( 105) 00:07:03.788 7057.723 - 7108.135: 5.3001% ( 117) 00:07:03.788 7108.135 - 7158.548: 6.2913% ( 144) 00:07:03.788 7158.548 - 7208.960: 7.3031% ( 147) 00:07:03.788 7208.960 - 7259.372: 8.3287% ( 149) 00:07:03.788 7259.372 - 7309.785: 9.4782% ( 167) 00:07:03.788 7309.785 - 7360.197: 10.7035% ( 178) 00:07:03.788 7360.197 - 7410.609: 12.0182% ( 191) 00:07:03.788 7410.609 - 7461.022: 13.3191% ( 189) 00:07:03.788 7461.022 - 7511.434: 14.7164% ( 203) 00:07:03.788 7511.434 - 7561.846: 16.1344% ( 206) 00:07:03.788 7561.846 - 7612.258: 17.5936% ( 212) 00:07:03.788 7612.258 - 7662.671: 19.2043% ( 234) 00:07:03.788 7662.671 - 7713.083: 20.7117% ( 219) 00:07:03.788 7713.083 - 7763.495: 22.3018% ( 231) 00:07:03.788 7763.495 - 7813.908: 23.8298% ( 222) 00:07:03.788 7813.908 - 7864.320: 25.4199% ( 231) 00:07:03.788 7864.320 - 7914.732: 26.9755% ( 226) 00:07:03.788 7914.732 - 7965.145: 28.5449% ( 228) 00:07:03.788 7965.145 - 8015.557: 30.3345% ( 260) 00:07:03.788 8015.557 - 8065.969: 32.0829% ( 254) 00:07:03.788 8065.969 - 8116.382: 33.9964% ( 278) 00:07:03.788 8116.382 - 8166.794: 35.9994% ( 291) 00:07:03.788 8166.794 - 8217.206: 37.9956% ( 290) 00:07:03.788 8217.206 - 8267.618: 40.1845% ( 318) 00:07:03.788 8267.618 - 8318.031: 42.3389% ( 313) 00:07:03.788 8318.031 - 8368.443: 44.4039% ( 300) 00:07:03.788 8368.443 - 8418.855: 46.5790% ( 316) 00:07:03.788 8418.855 - 8469.268: 48.6991% ( 308) 00:07:03.788 8469.268 - 8519.680: 50.7847% ( 303) 00:07:03.788 8519.680 - 8570.092: 53.0149% ( 324) 00:07:03.788 8570.092 - 8620.505: 55.1143% ( 305) 00:07:03.788 8620.505 - 8670.917: 57.3444% ( 324) 00:07:03.788 8670.917 - 8721.329: 59.5058% ( 314) 00:07:03.788 8721.329 - 8771.742: 61.5914% ( 303) 00:07:03.788 8771.742 - 8822.154: 63.5876% ( 290) 00:07:03.788 8822.154 - 8872.566: 65.5975% ( 292) 00:07:03.788 8872.566 - 8922.978: 67.5041% ( 277) 00:07:03.788 8922.978 - 8973.391: 69.3626% ( 270) 00:07:03.788 8973.391 - 9023.803: 71.0490% ( 245) 00:07:03.788 9023.803 - 9074.215: 72.6390% ( 231) 00:07:03.788 9074.215 - 9124.628: 74.0914% ( 211) 00:07:03.788 9124.628 - 9175.040: 75.3648% ( 185) 00:07:03.788 9175.040 - 9225.452: 76.5143% ( 167) 00:07:03.788 9225.452 - 9275.865: 77.6225% ( 161) 00:07:03.788 9275.865 - 9326.277: 78.7101% ( 158) 00:07:03.788 9326.277 - 9376.689: 79.7426% ( 150) 00:07:03.788 
9376.689 - 9427.102: 80.6236% ( 128) 00:07:03.788 9427.102 - 9477.514: 81.3945% ( 112) 00:07:03.788 9477.514 - 9527.926: 82.0966% ( 102) 00:07:03.788 9527.926 - 9578.338: 82.7299% ( 92) 00:07:03.788 9578.338 - 9628.751: 83.2943% ( 82) 00:07:03.788 9628.751 - 9679.163: 83.8175% ( 76) 00:07:03.788 9679.163 - 9729.575: 84.3268% ( 74) 00:07:03.788 9729.575 - 9779.988: 84.8362% ( 74) 00:07:03.788 9779.988 - 9830.400: 85.3662% ( 77) 00:07:03.788 9830.400 - 9880.812: 85.8274% ( 67) 00:07:03.788 9880.812 - 9931.225: 86.2335% ( 59) 00:07:03.788 9931.225 - 9981.637: 86.5708% ( 49) 00:07:03.788 9981.637 - 10032.049: 86.9562% ( 56) 00:07:03.788 10032.049 - 10082.462: 87.3210% ( 53) 00:07:03.788 10082.462 - 10132.874: 87.7340% ( 60) 00:07:03.788 10132.874 - 10183.286: 88.0231% ( 42) 00:07:03.788 10183.286 - 10233.698: 88.3191% ( 43) 00:07:03.788 10233.698 - 10284.111: 88.6082% ( 42) 00:07:03.788 10284.111 - 10334.523: 88.9111% ( 44) 00:07:03.788 10334.523 - 10384.935: 89.1795% ( 39) 00:07:03.788 10384.935 - 10435.348: 89.4411% ( 38) 00:07:03.788 10435.348 - 10485.760: 89.7164% ( 40) 00:07:03.788 10485.760 - 10536.172: 90.0193% ( 44) 00:07:03.788 10536.172 - 10586.585: 90.2946% ( 40) 00:07:03.788 10586.585 - 10636.997: 90.5493% ( 37) 00:07:03.788 10636.997 - 10687.409: 90.8040% ( 37) 00:07:03.788 10687.409 - 10737.822: 90.9898% ( 27) 00:07:03.788 10737.822 - 10788.234: 91.1963% ( 30) 00:07:03.788 10788.234 - 10838.646: 91.3615% ( 24) 00:07:03.788 10838.646 - 10889.058: 91.5198% ( 23) 00:07:03.788 10889.058 - 10939.471: 91.6713% ( 22) 00:07:03.788 10939.471 - 10989.883: 91.7814% ( 16) 00:07:03.788 10989.883 - 11040.295: 91.9053% ( 18) 00:07:03.788 11040.295 - 11090.708: 92.0085% ( 15) 00:07:03.788 11090.708 - 11141.120: 92.1187% ( 16) 00:07:03.788 11141.120 - 11191.532: 92.2426% ( 18) 00:07:03.788 11191.532 - 11241.945: 92.4146% ( 25) 00:07:03.788 11241.945 - 11292.357: 92.5730% ( 23) 00:07:03.788 11292.357 - 11342.769: 92.7175% ( 21) 00:07:03.788 11342.769 - 11393.182: 92.8965% ( 26) 00:07:03.788 11393.182 - 11443.594: 93.0892% ( 28) 00:07:03.788 11443.594 - 11494.006: 93.3095% ( 32) 00:07:03.788 11494.006 - 11544.418: 93.4953% ( 27) 00:07:03.788 11544.418 - 11594.831: 93.6674% ( 25) 00:07:03.788 11594.831 - 11645.243: 93.8326% ( 24) 00:07:03.788 11645.243 - 11695.655: 93.9978% ( 24) 00:07:03.788 11695.655 - 11746.068: 94.1630% ( 24) 00:07:03.788 11746.068 - 11796.480: 94.3282% ( 24) 00:07:03.788 11796.480 - 11846.892: 94.4659% ( 20) 00:07:03.788 11846.892 - 11897.305: 94.5966% ( 19) 00:07:03.788 11897.305 - 11947.717: 94.7205% ( 18) 00:07:03.788 11947.717 - 11998.129: 94.8651% ( 21) 00:07:03.788 11998.129 - 12048.542: 95.0028% ( 20) 00:07:03.788 12048.542 - 12098.954: 95.1267% ( 18) 00:07:03.788 12098.954 - 12149.366: 95.2574% ( 19) 00:07:03.788 12149.366 - 12199.778: 95.3676% ( 16) 00:07:03.788 12199.778 - 12250.191: 95.4570% ( 13) 00:07:03.788 12250.191 - 12300.603: 95.5741% ( 17) 00:07:03.788 12300.603 - 12351.015: 95.6567% ( 12) 00:07:03.788 12351.015 - 12401.428: 95.7393% ( 12) 00:07:03.788 12401.428 - 12451.840: 95.8219% ( 12) 00:07:03.788 12451.840 - 12502.252: 95.9251% ( 15) 00:07:03.788 12502.252 - 12552.665: 96.0146% ( 13) 00:07:03.788 12552.665 - 12603.077: 96.1110% ( 14) 00:07:03.788 12603.077 - 12653.489: 96.2073% ( 14) 00:07:03.788 12653.489 - 12703.902: 96.3175% ( 16) 00:07:03.788 12703.902 - 12754.314: 96.4207% ( 15) 00:07:03.788 12754.314 - 12804.726: 96.5377% ( 17) 00:07:03.788 12804.726 - 12855.138: 96.6341% ( 14) 00:07:03.788 12855.138 - 12905.551: 96.7442% ( 16) 00:07:03.788 
12905.551 - 13006.375: 96.9576% ( 31) 00:07:03.788 13006.375 - 13107.200: 97.1090% ( 22) 00:07:03.788 13107.200 - 13208.025: 97.2742% ( 24) 00:07:03.789 13208.025 - 13308.849: 97.4670% ( 28) 00:07:03.789 13308.849 - 13409.674: 97.6459% ( 26) 00:07:03.789 13409.674 - 13510.498: 97.8180% ( 25) 00:07:03.789 13510.498 - 13611.323: 97.9901% ( 25) 00:07:03.789 13611.323 - 13712.148: 98.0865% ( 14) 00:07:03.789 13712.148 - 13812.972: 98.1828% ( 14) 00:07:03.789 13812.972 - 13913.797: 98.2448% ( 9) 00:07:03.789 13913.797 - 14014.622: 98.3136% ( 10) 00:07:03.789 14014.622 - 14115.446: 98.3687% ( 8) 00:07:03.789 14115.446 - 14216.271: 98.3962% ( 4) 00:07:03.789 14216.271 - 14317.095: 98.4306% ( 5) 00:07:03.789 14317.095 - 14417.920: 98.4581% ( 4) 00:07:03.789 14417.920 - 14518.745: 98.4926% ( 5) 00:07:03.789 14518.745 - 14619.569: 98.5201% ( 4) 00:07:03.789 14619.569 - 14720.394: 98.5476% ( 4) 00:07:03.789 14720.394 - 14821.218: 98.5820% ( 5) 00:07:03.789 14821.218 - 14922.043: 98.6096% ( 4) 00:07:03.789 14922.043 - 15022.868: 98.6440% ( 5) 00:07:03.789 15022.868 - 15123.692: 98.6715% ( 4) 00:07:03.789 15123.692 - 15224.517: 98.6784% ( 1) 00:07:03.789 15426.166 - 15526.991: 98.6922% ( 2) 00:07:03.789 15526.991 - 15627.815: 98.7404% ( 7) 00:07:03.789 15627.815 - 15728.640: 98.7817% ( 6) 00:07:03.789 15728.640 - 15829.465: 98.8230% ( 6) 00:07:03.789 15829.465 - 15930.289: 98.8711% ( 7) 00:07:03.789 15930.289 - 16031.114: 98.8987% ( 4) 00:07:03.789 16031.114 - 16131.938: 98.9675% ( 10) 00:07:03.789 16131.938 - 16232.763: 99.0363% ( 10) 00:07:03.789 16232.763 - 16333.588: 99.1052% ( 10) 00:07:03.789 16333.588 - 16434.412: 99.1809% ( 11) 00:07:03.789 16434.412 - 16535.237: 99.2635% ( 12) 00:07:03.789 16535.237 - 16636.062: 99.3048% ( 6) 00:07:03.789 16636.062 - 16736.886: 99.3392% ( 5) 00:07:03.789 16736.886 - 16837.711: 99.3667% ( 4) 00:07:03.789 16837.711 - 16938.535: 99.4012% ( 5) 00:07:03.789 16938.535 - 17039.360: 99.4287% ( 4) 00:07:03.789 17039.360 - 17140.185: 99.4631% ( 5) 00:07:03.789 17140.185 - 17241.009: 99.4975% ( 5) 00:07:03.789 17241.009 - 17341.834: 99.5251% ( 4) 00:07:03.789 17341.834 - 17442.658: 99.5595% ( 5) 00:07:03.789 19660.800 - 19761.625: 99.6008% ( 6) 00:07:03.789 19761.625 - 19862.449: 99.6421% ( 6) 00:07:03.789 19862.449 - 19963.274: 99.6834% ( 6) 00:07:03.789 19963.274 - 20064.098: 99.7316% ( 7) 00:07:03.789 20064.098 - 20164.923: 99.7729% ( 6) 00:07:03.789 20164.923 - 20265.748: 99.8142% ( 6) 00:07:03.789 20265.748 - 20366.572: 99.8555% ( 6) 00:07:03.789 20366.572 - 20467.397: 99.9036% ( 7) 00:07:03.789 20467.397 - 20568.222: 99.9449% ( 6) 00:07:03.789 20568.222 - 20669.046: 99.9931% ( 7) 00:07:03.789 20669.046 - 20769.871: 100.0000% ( 1) 00:07:03.789 00:07:03.789 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:03.789 ============================================================================== 00:07:03.789 Range in us Cumulative IO count 00:07:03.789 4310.252 - 4335.458: 0.0275% ( 4) 00:07:03.789 4335.458 - 4360.665: 0.0344% ( 1) 00:07:03.789 4360.665 - 4385.871: 0.0482% ( 2) 00:07:03.789 4385.871 - 4411.077: 0.0619% ( 2) 00:07:03.789 4411.077 - 4436.283: 0.0757% ( 2) 00:07:03.789 4436.283 - 4461.489: 0.0895% ( 2) 00:07:03.789 4461.489 - 4486.695: 0.1101% ( 3) 00:07:03.789 4486.695 - 4511.902: 0.1308% ( 3) 00:07:03.789 4511.902 - 4537.108: 0.1445% ( 2) 00:07:03.789 4537.108 - 4562.314: 0.1583% ( 2) 00:07:03.789 4562.314 - 4587.520: 0.1721% ( 2) 00:07:03.789 4587.520 - 4612.726: 0.1858% ( 2) 00:07:03.789 4612.726 - 4637.932: 0.1996% ( 2) 00:07:03.789 
4637.932 - 4663.138: 0.2134% ( 2) 00:07:03.789 4663.138 - 4688.345: 0.2271% ( 2) 00:07:03.789 4688.345 - 4713.551: 0.2409% ( 2) 00:07:03.789 4713.551 - 4738.757: 0.2547% ( 2) 00:07:03.789 4738.757 - 4763.963: 0.2753% ( 3) 00:07:03.789 4763.963 - 4789.169: 0.2891% ( 2) 00:07:03.789 4789.169 - 4814.375: 0.3029% ( 2) 00:07:03.789 4814.375 - 4839.582: 0.3166% ( 2) 00:07:03.789 4839.582 - 4864.788: 0.3304% ( 2) 00:07:03.789 4864.788 - 4889.994: 0.3510% ( 3) 00:07:03.789 4889.994 - 4915.200: 0.3648% ( 2) 00:07:03.789 4915.200 - 4940.406: 0.3786% ( 2) 00:07:03.789 4940.406 - 4965.612: 0.3923% ( 2) 00:07:03.789 4965.612 - 4990.818: 0.4061% ( 2) 00:07:03.789 4990.818 - 5016.025: 0.4268% ( 3) 00:07:03.789 5016.025 - 5041.231: 0.4336% ( 1) 00:07:03.789 5041.231 - 5066.437: 0.4405% ( 1) 00:07:03.789 6326.745 - 6351.951: 0.4474% ( 1) 00:07:03.789 6351.951 - 6377.157: 0.4681% ( 3) 00:07:03.789 6377.157 - 6402.363: 0.4818% ( 2) 00:07:03.789 6402.363 - 6427.569: 0.4956% ( 2) 00:07:03.789 6427.569 - 6452.775: 0.5025% ( 1) 00:07:03.789 6452.775 - 6503.188: 0.5300% ( 4) 00:07:03.789 6503.188 - 6553.600: 0.5644% ( 5) 00:07:03.789 6553.600 - 6604.012: 0.6470% ( 12) 00:07:03.789 6604.012 - 6654.425: 0.7985% ( 22) 00:07:03.789 6654.425 - 6704.837: 1.0394% ( 35) 00:07:03.789 6704.837 - 6755.249: 1.4042% ( 53) 00:07:03.789 6755.249 - 6805.662: 1.8998% ( 72) 00:07:03.789 6805.662 - 6856.074: 2.4436% ( 79) 00:07:03.789 6856.074 - 6906.486: 3.1044% ( 96) 00:07:03.789 6906.486 - 6956.898: 3.8890% ( 114) 00:07:03.789 6956.898 - 7007.311: 4.6462% ( 110) 00:07:03.789 7007.311 - 7057.723: 5.4584% ( 118) 00:07:03.789 7057.723 - 7108.135: 6.3119% ( 124) 00:07:03.789 7108.135 - 7158.548: 7.1517% ( 122) 00:07:03.789 7158.548 - 7208.960: 8.0809% ( 135) 00:07:03.789 7208.960 - 7259.372: 9.0859% ( 146) 00:07:03.789 7259.372 - 7309.785: 10.1803% ( 159) 00:07:03.789 7309.785 - 7360.197: 11.2954% ( 162) 00:07:03.789 7360.197 - 7410.609: 12.4931% ( 174) 00:07:03.789 7410.609 - 7461.022: 13.8422% ( 196) 00:07:03.789 7461.022 - 7511.434: 15.2602% ( 206) 00:07:03.789 7511.434 - 7561.846: 16.6988% ( 209) 00:07:03.789 7561.846 - 7612.258: 18.2682% ( 228) 00:07:03.789 7612.258 - 7662.671: 19.7894% ( 221) 00:07:03.789 7662.671 - 7713.083: 21.3450% ( 226) 00:07:03.789 7713.083 - 7763.495: 22.7767% ( 208) 00:07:03.789 7763.495 - 7813.908: 24.2015% ( 207) 00:07:03.789 7813.908 - 7864.320: 25.6677% ( 213) 00:07:03.789 7864.320 - 7914.732: 27.1545% ( 216) 00:07:03.789 7914.732 - 7965.145: 28.7514% ( 232) 00:07:03.789 7965.145 - 8015.557: 30.3965% ( 239) 00:07:03.789 8015.557 - 8065.969: 32.1311% ( 252) 00:07:03.789 8065.969 - 8116.382: 33.8588% ( 251) 00:07:03.789 8116.382 - 8166.794: 35.6966% ( 267) 00:07:03.789 8166.794 - 8217.206: 37.7340% ( 296) 00:07:03.789 8217.206 - 8267.618: 39.8059% ( 301) 00:07:03.789 8267.618 - 8318.031: 41.9191% ( 307) 00:07:03.789 8318.031 - 8368.443: 43.9634% ( 297) 00:07:03.789 8368.443 - 8418.855: 46.1041% ( 311) 00:07:03.789 8418.855 - 8469.268: 48.2241% ( 308) 00:07:03.789 8469.268 - 8519.680: 50.3166% ( 304) 00:07:03.789 8519.680 - 8570.092: 52.4986% ( 317) 00:07:03.789 8570.092 - 8620.505: 54.6118% ( 307) 00:07:03.789 8620.505 - 8670.917: 56.7456% ( 310) 00:07:03.789 8670.917 - 8721.329: 58.7624% ( 293) 00:07:03.789 8721.329 - 8771.742: 60.9719% ( 321) 00:07:03.789 8771.742 - 8822.154: 62.9543% ( 288) 00:07:03.789 8822.154 - 8872.566: 64.9642% ( 292) 00:07:03.789 8872.566 - 8922.978: 66.8984% ( 281) 00:07:03.789 8922.978 - 8973.391: 68.7294% ( 266) 00:07:03.789 8973.391 - 9023.803: 70.5465% ( 264) 
00:07:03.789 9023.803 - 9074.215: 72.2811% ( 252) 00:07:03.789 9074.215 - 9124.628: 73.8987% ( 235) 00:07:03.789 9124.628 - 9175.040: 75.3579% ( 212) 00:07:03.789 9175.040 - 9225.452: 76.6520% ( 188) 00:07:03.789 9225.452 - 9275.865: 77.8428% ( 173) 00:07:03.789 9275.865 - 9326.277: 78.9854% ( 166) 00:07:03.789 9326.277 - 9376.689: 80.0661% ( 157) 00:07:03.789 9376.689 - 9427.102: 81.0229% ( 139) 00:07:03.789 9427.102 - 9477.514: 81.8626% ( 122) 00:07:03.789 9477.514 - 9527.926: 82.6198% ( 110) 00:07:03.789 9527.926 - 9578.338: 83.2737% ( 95) 00:07:03.789 9578.338 - 9628.751: 83.9207% ( 94) 00:07:03.789 9628.751 - 9679.163: 84.4989% ( 84) 00:07:03.789 9679.163 - 9729.575: 84.9876% ( 71) 00:07:03.789 9729.575 - 9779.988: 85.4419% ( 66) 00:07:03.789 9779.988 - 9830.400: 85.8687% ( 62) 00:07:03.789 9830.400 - 9880.812: 86.2472% ( 55) 00:07:03.789 9880.812 - 9931.225: 86.6327% ( 56) 00:07:03.789 9931.225 - 9981.637: 87.0251% ( 57) 00:07:03.789 9981.637 - 10032.049: 87.3486% ( 47) 00:07:03.789 10032.049 - 10082.462: 87.7271% ( 55) 00:07:03.789 10082.462 - 10132.874: 88.1057% ( 55) 00:07:03.790 10132.874 - 10183.286: 88.3879% ( 41) 00:07:03.790 10183.286 - 10233.698: 88.7321% ( 50) 00:07:03.790 10233.698 - 10284.111: 89.0487% ( 46) 00:07:03.790 10284.111 - 10334.523: 89.3172% ( 39) 00:07:03.790 10334.523 - 10384.935: 89.6200% ( 44) 00:07:03.790 10384.935 - 10435.348: 89.8610% ( 35) 00:07:03.790 10435.348 - 10485.760: 90.1156% ( 37) 00:07:03.790 10485.760 - 10536.172: 90.3910% ( 40) 00:07:03.790 10536.172 - 10586.585: 90.6525% ( 38) 00:07:03.790 10586.585 - 10636.997: 90.9072% ( 37) 00:07:03.790 10636.997 - 10687.409: 91.1481% ( 35) 00:07:03.790 10687.409 - 10737.822: 91.3684% ( 32) 00:07:03.790 10737.822 - 10788.234: 91.5818% ( 31) 00:07:03.790 10788.234 - 10838.646: 91.7952% ( 31) 00:07:03.790 10838.646 - 10889.058: 91.9741% ( 26) 00:07:03.790 10889.058 - 10939.471: 92.1393% ( 24) 00:07:03.790 10939.471 - 10989.883: 92.2907% ( 22) 00:07:03.790 10989.883 - 11040.295: 92.4078% ( 17) 00:07:03.790 11040.295 - 11090.708: 92.5248% ( 17) 00:07:03.790 11090.708 - 11141.120: 92.5936% ( 10) 00:07:03.790 11141.120 - 11191.532: 92.6900% ( 14) 00:07:03.790 11191.532 - 11241.945: 92.7726% ( 12) 00:07:03.790 11241.945 - 11292.357: 92.8689% ( 14) 00:07:03.790 11292.357 - 11342.769: 92.9584% ( 13) 00:07:03.790 11342.769 - 11393.182: 93.0617% ( 15) 00:07:03.790 11393.182 - 11443.594: 93.1649% ( 15) 00:07:03.790 11443.594 - 11494.006: 93.2819% ( 17) 00:07:03.790 11494.006 - 11544.418: 93.3990% ( 17) 00:07:03.790 11544.418 - 11594.831: 93.5229% ( 18) 00:07:03.790 11594.831 - 11645.243: 93.6055% ( 12) 00:07:03.790 11645.243 - 11695.655: 93.7087% ( 15) 00:07:03.790 11695.655 - 11746.068: 93.7913% ( 12) 00:07:03.790 11746.068 - 11796.480: 93.8877% ( 14) 00:07:03.790 11796.480 - 11846.892: 93.9978% ( 16) 00:07:03.790 11846.892 - 11897.305: 94.1217% ( 18) 00:07:03.790 11897.305 - 11947.717: 94.2456% ( 18) 00:07:03.790 11947.717 - 11998.129: 94.3695% ( 18) 00:07:03.790 11998.129 - 12048.542: 94.4934% ( 18) 00:07:03.790 12048.542 - 12098.954: 94.6173% ( 18) 00:07:03.790 12098.954 - 12149.366: 94.7274% ( 16) 00:07:03.790 12149.366 - 12199.778: 94.8720% ( 21) 00:07:03.790 12199.778 - 12250.191: 94.9890% ( 17) 00:07:03.790 12250.191 - 12300.603: 95.1404% ( 22) 00:07:03.790 12300.603 - 12351.015: 95.2712% ( 19) 00:07:03.790 12351.015 - 12401.428: 95.3951% ( 18) 00:07:03.790 12401.428 - 12451.840: 95.5190% ( 18) 00:07:03.790 12451.840 - 12502.252: 95.6635% ( 21) 00:07:03.790 12502.252 - 12552.665: 95.8012% ( 20) 00:07:03.790 
12552.665 - 12603.077: 95.9458% ( 21) 00:07:03.790 12603.077 - 12653.489: 96.0765% ( 19) 00:07:03.790 12653.489 - 12703.902: 96.1729% ( 14) 00:07:03.790 12703.902 - 12754.314: 96.2486% ( 11) 00:07:03.790 12754.314 - 12804.726: 96.3588% ( 16) 00:07:03.790 12804.726 - 12855.138: 96.4689% ( 16) 00:07:03.790 12855.138 - 12905.551: 96.5584% ( 13) 00:07:03.790 12905.551 - 13006.375: 96.7855% ( 33) 00:07:03.790 13006.375 - 13107.200: 97.0195% ( 34) 00:07:03.790 13107.200 - 13208.025: 97.2123% ( 28) 00:07:03.790 13208.025 - 13308.849: 97.3706% ( 23) 00:07:03.790 13308.849 - 13409.674: 97.5289% ( 23) 00:07:03.790 13409.674 - 13510.498: 97.6184% ( 13) 00:07:03.790 13510.498 - 13611.323: 97.7010% ( 12) 00:07:03.790 13611.323 - 13712.148: 97.8111% ( 16) 00:07:03.790 13712.148 - 13812.972: 97.9419% ( 19) 00:07:03.790 13812.972 - 13913.797: 98.0314% ( 13) 00:07:03.790 13913.797 - 14014.622: 98.1278% ( 14) 00:07:03.790 14014.622 - 14115.446: 98.2241% ( 14) 00:07:03.790 14115.446 - 14216.271: 98.3205% ( 14) 00:07:03.790 14216.271 - 14317.095: 98.3824% ( 9) 00:07:03.790 14317.095 - 14417.920: 98.4444% ( 9) 00:07:03.790 14417.920 - 14518.745: 98.4994% ( 8) 00:07:03.790 14518.745 - 14619.569: 98.5476% ( 7) 00:07:03.790 14619.569 - 14720.394: 98.5752% ( 4) 00:07:03.790 14720.394 - 14821.218: 98.6096% ( 5) 00:07:03.790 14821.218 - 14922.043: 98.6440% ( 5) 00:07:03.790 14922.043 - 15022.868: 98.6784% ( 5) 00:07:03.790 15224.517 - 15325.342: 98.6853% ( 1) 00:07:03.790 15325.342 - 15426.166: 98.7266% ( 6) 00:07:03.790 15426.166 - 15526.991: 98.7748% ( 7) 00:07:03.790 15526.991 - 15627.815: 98.8230% ( 7) 00:07:03.790 15627.815 - 15728.640: 98.8643% ( 6) 00:07:03.790 15728.640 - 15829.465: 98.9056% ( 6) 00:07:03.790 15829.465 - 15930.289: 98.9469% ( 6) 00:07:03.790 15930.289 - 16031.114: 98.9813% ( 5) 00:07:03.790 16031.114 - 16131.938: 99.0226% ( 6) 00:07:03.790 16131.938 - 16232.763: 99.0708% ( 7) 00:07:03.790 16232.763 - 16333.588: 99.1121% ( 6) 00:07:03.790 16333.588 - 16434.412: 99.1189% ( 1) 00:07:03.790 16636.062 - 16736.886: 99.1327% ( 2) 00:07:03.790 16736.886 - 16837.711: 99.1534% ( 3) 00:07:03.790 16837.711 - 16938.535: 99.1878% ( 5) 00:07:03.790 16938.535 - 17039.360: 99.2153% ( 4) 00:07:03.790 17039.360 - 17140.185: 99.2497% ( 5) 00:07:03.790 17140.185 - 17241.009: 99.2773% ( 4) 00:07:03.790 17241.009 - 17341.834: 99.3117% ( 5) 00:07:03.790 17341.834 - 17442.658: 99.3392% ( 4) 00:07:03.790 17442.658 - 17543.483: 99.3736% ( 5) 00:07:03.790 17543.483 - 17644.308: 99.4012% ( 4) 00:07:03.790 17644.308 - 17745.132: 99.4287% ( 4) 00:07:03.790 17745.132 - 17845.957: 99.4631% ( 5) 00:07:03.790 17845.957 - 17946.782: 99.4906% ( 4) 00:07:03.790 17946.782 - 18047.606: 99.5251% ( 5) 00:07:03.790 18047.606 - 18148.431: 99.5595% ( 5) 00:07:03.790 19358.326 - 19459.151: 99.5870% ( 4) 00:07:03.790 19459.151 - 19559.975: 99.6352% ( 7) 00:07:03.790 19559.975 - 19660.800: 99.6765% ( 6) 00:07:03.790 19660.800 - 19761.625: 99.7178% ( 6) 00:07:03.790 19761.625 - 19862.449: 99.7591% ( 6) 00:07:03.790 19862.449 - 19963.274: 99.8004% ( 6) 00:07:03.790 19963.274 - 20064.098: 99.8486% ( 7) 00:07:03.790 20064.098 - 20164.923: 99.8899% ( 6) 00:07:03.790 20164.923 - 20265.748: 99.9312% ( 6) 00:07:03.790 20265.748 - 20366.572: 99.9725% ( 6) 00:07:03.790 20366.572 - 20467.397: 100.0000% ( 4) 00:07:03.790 00:07:03.790 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:03.790 ============================================================================== 00:07:03.790 Range in us Cumulative IO count 00:07:03.790 4083.397 
- 4108.603: 0.0069% ( 1) 00:07:03.790 4108.603 - 4133.809: 0.0206% ( 2) 00:07:03.790 4133.809 - 4159.015: 0.0344% ( 2) 00:07:03.790 4159.015 - 4184.222: 0.0482% ( 2) 00:07:03.790 4184.222 - 4209.428: 0.0619% ( 2) 00:07:03.790 4209.428 - 4234.634: 0.0688% ( 1) 00:07:03.790 4234.634 - 4259.840: 0.0757% ( 1) 00:07:03.790 4259.840 - 4285.046: 0.0964% ( 3) 00:07:03.790 4285.046 - 4310.252: 0.1101% ( 2) 00:07:03.790 4310.252 - 4335.458: 0.1239% ( 2) 00:07:03.790 4335.458 - 4360.665: 0.1377% ( 2) 00:07:03.790 4360.665 - 4385.871: 0.1514% ( 2) 00:07:03.790 4385.871 - 4411.077: 0.1652% ( 2) 00:07:03.790 4411.077 - 4436.283: 0.1790% ( 2) 00:07:03.790 4436.283 - 4461.489: 0.1927% ( 2) 00:07:03.790 4461.489 - 4486.695: 0.2065% ( 2) 00:07:03.790 4486.695 - 4511.902: 0.2271% ( 3) 00:07:03.790 4511.902 - 4537.108: 0.2409% ( 2) 00:07:03.790 4537.108 - 4562.314: 0.2547% ( 2) 00:07:03.790 4562.314 - 4587.520: 0.2684% ( 2) 00:07:03.790 4587.520 - 4612.726: 0.2822% ( 2) 00:07:03.790 4612.726 - 4637.932: 0.2960% ( 2) 00:07:03.790 4637.932 - 4663.138: 0.3097% ( 2) 00:07:03.790 4663.138 - 4688.345: 0.3235% ( 2) 00:07:03.790 4688.345 - 4713.551: 0.3373% ( 2) 00:07:03.790 4713.551 - 4738.757: 0.3510% ( 2) 00:07:03.790 4738.757 - 4763.963: 0.3648% ( 2) 00:07:03.790 4763.963 - 4789.169: 0.3786% ( 2) 00:07:03.790 4789.169 - 4814.375: 0.3992% ( 3) 00:07:03.790 4814.375 - 4839.582: 0.4130% ( 2) 00:07:03.790 4839.582 - 4864.788: 0.4268% ( 2) 00:07:03.790 4864.788 - 4889.994: 0.4405% ( 2) 00:07:03.790 6553.600 - 6604.012: 0.5920% ( 22) 00:07:03.790 6604.012 - 6654.425: 0.7709% ( 26) 00:07:03.790 6654.425 - 6704.837: 1.0394% ( 39) 00:07:03.790 6704.837 - 6755.249: 1.4455% ( 59) 00:07:03.790 6755.249 - 6805.662: 1.9135% ( 68) 00:07:03.790 6805.662 - 6856.074: 2.4023% ( 71) 00:07:03.790 6856.074 - 6906.486: 2.9185% ( 75) 00:07:03.790 6906.486 - 6956.898: 3.6137% ( 101) 00:07:03.790 6956.898 - 7007.311: 4.3158% ( 102) 00:07:03.790 7007.311 - 7057.723: 5.1556% ( 122) 00:07:03.790 7057.723 - 7108.135: 5.9953% ( 122) 00:07:03.790 7108.135 - 7158.548: 7.0072% ( 147) 00:07:03.790 7158.548 - 7208.960: 8.0121% ( 146) 00:07:03.790 7208.960 - 7259.372: 9.0377% ( 149) 00:07:03.790 7259.372 - 7309.785: 10.1666% ( 164) 00:07:03.790 7309.785 - 7360.197: 11.4469% ( 186) 00:07:03.790 7360.197 - 7410.609: 12.6445% ( 174) 00:07:03.790 7410.609 - 7461.022: 13.8698% ( 178) 00:07:03.790 7461.022 - 7511.434: 15.2051% ( 194) 00:07:03.790 7511.434 - 7561.846: 16.4785% ( 185) 00:07:03.790 7561.846 - 7612.258: 17.9034% ( 207) 00:07:03.790 7612.258 - 7662.671: 19.3764% ( 214) 00:07:03.790 7662.671 - 7713.083: 20.8287% ( 211) 00:07:03.790 7713.083 - 7763.495: 22.2467% ( 206) 00:07:03.790 7763.495 - 7813.908: 23.7541% ( 219) 00:07:03.790 7813.908 - 7864.320: 25.2891% ( 223) 00:07:03.790 7864.320 - 7914.732: 26.8447% ( 226) 00:07:03.790 7914.732 - 7965.145: 28.4141% ( 228) 00:07:03.790 7965.145 - 8015.557: 30.0523% ( 238) 00:07:03.790 8015.557 - 8065.969: 31.8282% ( 258) 00:07:03.790 8065.969 - 8116.382: 33.6729% ( 268) 00:07:03.790 8116.382 - 8166.794: 35.5796% ( 277) 00:07:03.790 8166.794 - 8217.206: 37.5069% ( 280) 00:07:03.790 8217.206 - 8267.618: 39.5168% ( 292) 00:07:03.790 8267.618 - 8318.031: 41.7057% ( 318) 00:07:03.790 8318.031 - 8368.443: 43.9290% ( 323) 00:07:03.790 8368.443 - 8418.855: 46.2486% ( 337) 00:07:03.790 8418.855 - 8469.268: 48.4857% ( 325) 00:07:03.790 8469.268 - 8519.680: 50.7709% ( 332) 00:07:03.790 8519.680 - 8570.092: 52.9942% ( 323) 00:07:03.790 8570.092 - 8620.505: 55.2932% ( 334) 00:07:03.790 8620.505 - 8670.917: 57.5647% 
( 330) 00:07:03.790 8670.917 - 8721.329: 59.7673% ( 320) 00:07:03.790 8721.329 - 8771.742: 62.0182% ( 327) 00:07:03.790 8771.742 - 8822.154: 64.1795% ( 314) 00:07:03.790 8822.154 - 8872.566: 66.3202% ( 311) 00:07:03.790 8872.566 - 8922.978: 68.3301% ( 292) 00:07:03.790 8922.978 - 8973.391: 70.1680% ( 267) 00:07:03.791 8973.391 - 9023.803: 71.8888% ( 250) 00:07:03.791 9023.803 - 9074.215: 73.5132% ( 236) 00:07:03.791 9074.215 - 9124.628: 75.0826% ( 228) 00:07:03.791 9124.628 - 9175.040: 76.5487% ( 213) 00:07:03.791 9175.040 - 9225.452: 77.8841% ( 194) 00:07:03.791 9225.452 - 9275.865: 79.1506% ( 184) 00:07:03.791 9275.865 - 9326.277: 80.2313% ( 157) 00:07:03.791 9326.277 - 9376.689: 81.2431% ( 147) 00:07:03.791 9376.689 - 9427.102: 82.1448% ( 131) 00:07:03.791 9427.102 - 9477.514: 82.9570% ( 118) 00:07:03.791 9477.514 - 9527.926: 83.6454% ( 100) 00:07:03.791 9527.926 - 9578.338: 84.2236% ( 84) 00:07:03.791 9578.338 - 9628.751: 84.8018% ( 84) 00:07:03.791 9628.751 - 9679.163: 85.3042% ( 73) 00:07:03.791 9679.163 - 9729.575: 85.7172% ( 60) 00:07:03.791 9729.575 - 9779.988: 86.1233% ( 59) 00:07:03.791 9779.988 - 9830.400: 86.4606% ( 49) 00:07:03.791 9830.400 - 9880.812: 86.7841% ( 47) 00:07:03.791 9880.812 - 9931.225: 87.0939% ( 45) 00:07:03.791 9931.225 - 9981.637: 87.3899% ( 43) 00:07:03.791 9981.637 - 10032.049: 87.6308% ( 35) 00:07:03.791 10032.049 - 10082.462: 87.8786% ( 36) 00:07:03.791 10082.462 - 10132.874: 88.1539% ( 40) 00:07:03.791 10132.874 - 10183.286: 88.4017% ( 36) 00:07:03.791 10183.286 - 10233.698: 88.6495% ( 36) 00:07:03.791 10233.698 - 10284.111: 88.8216% ( 25) 00:07:03.791 10284.111 - 10334.523: 89.0350% ( 31) 00:07:03.791 10334.523 - 10384.935: 89.2552% ( 32) 00:07:03.791 10384.935 - 10435.348: 89.4961% ( 35) 00:07:03.791 10435.348 - 10485.760: 89.7508% ( 37) 00:07:03.791 10485.760 - 10536.172: 89.9504% ( 29) 00:07:03.791 10536.172 - 10586.585: 90.1776% ( 33) 00:07:03.791 10586.585 - 10636.997: 90.4047% ( 33) 00:07:03.791 10636.997 - 10687.409: 90.6594% ( 37) 00:07:03.791 10687.409 - 10737.822: 90.8590% ( 29) 00:07:03.791 10737.822 - 10788.234: 91.0793% ( 32) 00:07:03.791 10788.234 - 10838.646: 91.2996% ( 32) 00:07:03.791 10838.646 - 10889.058: 91.5267% ( 33) 00:07:03.791 10889.058 - 10939.471: 91.7332% ( 30) 00:07:03.791 10939.471 - 10989.883: 91.9191% ( 27) 00:07:03.791 10989.883 - 11040.295: 92.0911% ( 25) 00:07:03.791 11040.295 - 11090.708: 92.2357% ( 21) 00:07:03.791 11090.708 - 11141.120: 92.3802% ( 21) 00:07:03.791 11141.120 - 11191.532: 92.5523% ( 25) 00:07:03.791 11191.532 - 11241.945: 92.6900% ( 20) 00:07:03.791 11241.945 - 11292.357: 92.8208% ( 19) 00:07:03.791 11292.357 - 11342.769: 92.9653% ( 21) 00:07:03.791 11342.769 - 11393.182: 93.0754% ( 16) 00:07:03.791 11393.182 - 11443.594: 93.1787% ( 15) 00:07:03.791 11443.594 - 11494.006: 93.2957% ( 17) 00:07:03.791 11494.006 - 11544.418: 93.3783% ( 12) 00:07:03.791 11544.418 - 11594.831: 93.4678% ( 13) 00:07:03.791 11594.831 - 11645.243: 93.5435% ( 11) 00:07:03.791 11645.243 - 11695.655: 93.6261% ( 12) 00:07:03.791 11695.655 - 11746.068: 93.7087% ( 12) 00:07:03.791 11746.068 - 11796.480: 93.7913% ( 12) 00:07:03.791 11796.480 - 11846.892: 93.8877% ( 14) 00:07:03.791 11846.892 - 11897.305: 93.9909% ( 15) 00:07:03.791 11897.305 - 11947.717: 94.1010% ( 16) 00:07:03.791 11947.717 - 11998.129: 94.2112% ( 16) 00:07:03.791 11998.129 - 12048.542: 94.3351% ( 18) 00:07:03.791 12048.542 - 12098.954: 94.4590% ( 18) 00:07:03.791 12098.954 - 12149.366: 94.5829% ( 18) 00:07:03.791 12149.366 - 12199.778: 94.7343% ( 22) 00:07:03.791 
12199.778 - 12250.191: 94.8789% ( 21) 00:07:03.791 12250.191 - 12300.603: 95.0303% ( 22) 00:07:03.791 12300.603 - 12351.015: 95.1680% ( 20) 00:07:03.791 12351.015 - 12401.428: 95.3607% ( 28) 00:07:03.791 12401.428 - 12451.840: 95.4983% ( 20) 00:07:03.791 12451.840 - 12502.252: 95.6635% ( 24) 00:07:03.791 12502.252 - 12552.665: 95.8081% ( 21) 00:07:03.791 12552.665 - 12603.077: 95.9733% ( 24) 00:07:03.791 12603.077 - 12653.489: 96.0834% ( 16) 00:07:03.791 12653.489 - 12703.902: 96.2004% ( 17) 00:07:03.791 12703.902 - 12754.314: 96.3106% ( 16) 00:07:03.791 12754.314 - 12804.726: 96.4276% ( 17) 00:07:03.791 12804.726 - 12855.138: 96.5377% ( 16) 00:07:03.791 12855.138 - 12905.551: 96.6410% ( 15) 00:07:03.791 12905.551 - 13006.375: 96.8268% ( 27) 00:07:03.791 13006.375 - 13107.200: 96.9920% ( 24) 00:07:03.791 13107.200 - 13208.025: 97.1297% ( 20) 00:07:03.791 13208.025 - 13308.849: 97.2673% ( 20) 00:07:03.791 13308.849 - 13409.674: 97.3775% ( 16) 00:07:03.791 13409.674 - 13510.498: 97.4807% ( 15) 00:07:03.791 13510.498 - 13611.323: 97.5427% ( 9) 00:07:03.791 13611.323 - 13712.148: 97.6253% ( 12) 00:07:03.791 13712.148 - 13812.972: 97.7079% ( 12) 00:07:03.791 13812.972 - 13913.797: 97.8042% ( 14) 00:07:03.791 13913.797 - 14014.622: 97.8800% ( 11) 00:07:03.791 14014.622 - 14115.446: 97.9763% ( 14) 00:07:03.791 14115.446 - 14216.271: 98.0589% ( 12) 00:07:03.791 14216.271 - 14317.095: 98.1209% ( 9) 00:07:03.791 14317.095 - 14417.920: 98.1759% ( 8) 00:07:03.791 14417.920 - 14518.745: 98.2379% ( 9) 00:07:03.791 14518.745 - 14619.569: 98.3067% ( 10) 00:07:03.791 14619.569 - 14720.394: 98.3687% ( 9) 00:07:03.791 14720.394 - 14821.218: 98.4719% ( 15) 00:07:03.791 14821.218 - 14922.043: 98.5889% ( 17) 00:07:03.791 14922.043 - 15022.868: 98.6991% ( 16) 00:07:03.791 15022.868 - 15123.692: 98.8023% ( 15) 00:07:03.791 15123.692 - 15224.517: 98.8918% ( 13) 00:07:03.791 15224.517 - 15325.342: 98.9331% ( 6) 00:07:03.791 15325.342 - 15426.166: 98.9744% ( 6) 00:07:03.791 15426.166 - 15526.991: 99.0088% ( 5) 00:07:03.791 15526.991 - 15627.815: 99.0501% ( 6) 00:07:03.791 15627.815 - 15728.640: 99.0845% ( 5) 00:07:03.791 15728.640 - 15829.465: 99.1189% ( 5) 00:07:03.791 16636.062 - 16736.886: 99.1534% ( 5) 00:07:03.791 16736.886 - 16837.711: 99.1809% ( 4) 00:07:03.791 16837.711 - 16938.535: 99.2084% ( 4) 00:07:03.791 16938.535 - 17039.360: 99.2428% ( 5) 00:07:03.791 17039.360 - 17140.185: 99.2704% ( 4) 00:07:03.791 17140.185 - 17241.009: 99.2979% ( 4) 00:07:03.791 17241.009 - 17341.834: 99.3323% ( 5) 00:07:03.791 17341.834 - 17442.658: 99.3599% ( 4) 00:07:03.791 17442.658 - 17543.483: 99.3874% ( 4) 00:07:03.791 17543.483 - 17644.308: 99.4149% ( 4) 00:07:03.791 17644.308 - 17745.132: 99.4493% ( 5) 00:07:03.791 17745.132 - 17845.957: 99.4769% ( 4) 00:07:03.791 17845.957 - 17946.782: 99.5113% ( 5) 00:07:03.791 17946.782 - 18047.606: 99.5388% ( 4) 00:07:03.791 18047.606 - 18148.431: 99.5595% ( 3) 00:07:03.791 19257.502 - 19358.326: 99.5732% ( 2) 00:07:03.791 19358.326 - 19459.151: 99.6145% ( 6) 00:07:03.791 19459.151 - 19559.975: 99.6558% ( 6) 00:07:03.791 19559.975 - 19660.800: 99.6971% ( 6) 00:07:03.791 19660.800 - 19761.625: 99.7384% ( 6) 00:07:03.791 19761.625 - 19862.449: 99.7797% ( 6) 00:07:03.791 19862.449 - 19963.274: 99.8210% ( 6) 00:07:03.791 19963.274 - 20064.098: 99.8623% ( 6) 00:07:03.791 20064.098 - 20164.923: 99.9036% ( 6) 00:07:03.791 20164.923 - 20265.748: 99.9449% ( 6) 00:07:03.791 20265.748 - 20366.572: 99.9862% ( 6) 00:07:03.791 20366.572 - 20467.397: 100.0000% ( 2) 00:07:03.791 00:07:03.791 Latency 
histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:03.791 ============================================================================== 00:07:03.791 Range in us Cumulative IO count 00:07:03.791 3932.160 - 3957.366: 0.0138% ( 2) 00:07:03.791 3957.366 - 3982.572: 0.0551% ( 6) 00:07:03.791 3982.572 - 4007.778: 0.0688% ( 2) 00:07:03.791 4007.778 - 4032.985: 0.0757% ( 1) 00:07:03.791 4032.985 - 4058.191: 0.0826% ( 1) 00:07:03.791 4058.191 - 4083.397: 0.0895% ( 1) 00:07:03.791 4083.397 - 4108.603: 0.0964% ( 1) 00:07:03.791 4108.603 - 4133.809: 0.1170% ( 3) 00:07:03.791 4133.809 - 4159.015: 0.1377% ( 3) 00:07:03.791 4159.015 - 4184.222: 0.1514% ( 2) 00:07:03.791 4184.222 - 4209.428: 0.1652% ( 2) 00:07:03.791 4209.428 - 4234.634: 0.1790% ( 2) 00:07:03.791 4234.634 - 4259.840: 0.2065% ( 4) 00:07:03.791 4259.840 - 4285.046: 0.2203% ( 2) 00:07:03.791 4285.046 - 4310.252: 0.2340% ( 2) 00:07:03.791 4310.252 - 4335.458: 0.2478% ( 2) 00:07:03.791 4335.458 - 4360.665: 0.2616% ( 2) 00:07:03.791 4360.665 - 4385.871: 0.2753% ( 2) 00:07:03.791 4385.871 - 4411.077: 0.2891% ( 2) 00:07:03.791 4411.077 - 4436.283: 0.3029% ( 2) 00:07:03.791 4436.283 - 4461.489: 0.3166% ( 2) 00:07:03.791 4461.489 - 4486.695: 0.3304% ( 2) 00:07:03.791 4486.695 - 4511.902: 0.3442% ( 2) 00:07:03.791 4511.902 - 4537.108: 0.3579% ( 2) 00:07:03.791 4537.108 - 4562.314: 0.3648% ( 1) 00:07:03.791 4562.314 - 4587.520: 0.3717% ( 1) 00:07:03.791 4587.520 - 4612.726: 0.3923% ( 3) 00:07:03.791 4612.726 - 4637.932: 0.3992% ( 1) 00:07:03.791 4637.932 - 4663.138: 0.4199% ( 3) 00:07:03.791 4663.138 - 4688.345: 0.4336% ( 2) 00:07:03.791 4688.345 - 4713.551: 0.4405% ( 1) 00:07:03.791 6125.095 - 6150.302: 0.4681% ( 4) 00:07:03.791 6150.302 - 6175.508: 0.4818% ( 2) 00:07:03.791 6175.508 - 6200.714: 0.4887% ( 1) 00:07:03.791 6200.714 - 6225.920: 0.5025% ( 2) 00:07:03.791 6225.920 - 6251.126: 0.5162% ( 2) 00:07:03.791 6251.126 - 6276.332: 0.5300% ( 2) 00:07:03.791 6276.332 - 6301.538: 0.5575% ( 4) 00:07:03.791 6301.538 - 6326.745: 0.5713% ( 2) 00:07:03.791 6326.745 - 6351.951: 0.5851% ( 2) 00:07:03.791 6351.951 - 6377.157: 0.5920% ( 1) 00:07:03.791 6377.157 - 6402.363: 0.6057% ( 2) 00:07:03.791 6402.363 - 6427.569: 0.6195% ( 2) 00:07:03.791 6427.569 - 6452.775: 0.6333% ( 2) 00:07:03.791 6452.775 - 6503.188: 0.6608% ( 4) 00:07:03.791 6503.188 - 6553.600: 0.7227% ( 9) 00:07:03.791 6553.600 - 6604.012: 0.8535% ( 19) 00:07:03.791 6604.012 - 6654.425: 1.0187% ( 24) 00:07:03.791 6654.425 - 6704.837: 1.3078% ( 42) 00:07:03.791 6704.837 - 6755.249: 1.7208% ( 60) 00:07:03.791 6755.249 - 6805.662: 2.1820% ( 67) 00:07:03.791 6805.662 - 6856.074: 2.6363% ( 66) 00:07:03.791 6856.074 - 6906.486: 3.2902% ( 95) 00:07:03.791 6906.486 - 6956.898: 3.8753% ( 85) 00:07:03.791 6956.898 - 7007.311: 4.4948% ( 90) 00:07:03.791 7007.311 - 7057.723: 5.1831% ( 100) 00:07:03.791 7057.723 - 7108.135: 5.8645% ( 99) 00:07:03.791 7108.135 - 7158.548: 6.7869% ( 134) 00:07:03.791 7158.548 - 7208.960: 7.6886% ( 131) 00:07:03.791 7208.960 - 7259.372: 8.7142% ( 149) 00:07:03.791 7259.372 - 7309.785: 9.8293% ( 162) 00:07:03.791 7309.785 - 7360.197: 11.1165% ( 187) 00:07:03.792 7360.197 - 7410.609: 12.3761% ( 183) 00:07:03.792 7410.609 - 7461.022: 13.7183% ( 195) 00:07:03.792 7461.022 - 7511.434: 14.9711% ( 182) 00:07:03.792 7511.434 - 7561.846: 16.2789% ( 190) 00:07:03.792 7561.846 - 7612.258: 17.5867% ( 190) 00:07:03.792 7612.258 - 7662.671: 19.0184% ( 208) 00:07:03.792 7662.671 - 7713.083: 20.4020% ( 201) 00:07:03.792 7713.083 - 7763.495: 21.8475% ( 210) 00:07:03.792 7763.495 - 
7813.908: 23.3136% ( 213) 00:07:03.792 7813.908 - 7864.320: 24.8830% ( 228) 00:07:03.792 7864.320 - 7914.732: 26.4661% ( 230) 00:07:03.792 7914.732 - 7965.145: 28.0906% ( 236) 00:07:03.792 7965.145 - 8015.557: 29.7907% ( 247) 00:07:03.792 8015.557 - 8065.969: 31.6286% ( 267) 00:07:03.792 8065.969 - 8116.382: 33.6247% ( 290) 00:07:03.792 8116.382 - 8166.794: 35.8618% ( 325) 00:07:03.792 8166.794 - 8217.206: 37.9818% ( 308) 00:07:03.792 8217.206 - 8267.618: 40.1569% ( 316) 00:07:03.792 8267.618 - 8318.031: 42.3802% ( 323) 00:07:03.792 8318.031 - 8368.443: 44.5209% ( 311) 00:07:03.792 8368.443 - 8418.855: 46.8268% ( 335) 00:07:03.792 8418.855 - 8469.268: 49.0983% ( 330) 00:07:03.792 8469.268 - 8519.680: 51.3422% ( 326) 00:07:03.792 8519.680 - 8570.092: 53.6825% ( 340) 00:07:03.792 8570.092 - 8620.505: 56.0573% ( 345) 00:07:03.792 8620.505 - 8670.917: 58.2874% ( 324) 00:07:03.792 8670.917 - 8721.329: 60.3937% ( 306) 00:07:03.792 8721.329 - 8771.742: 62.5138% ( 308) 00:07:03.792 8771.742 - 8822.154: 64.5650% ( 298) 00:07:03.792 8822.154 - 8872.566: 66.6713% ( 306) 00:07:03.792 8872.566 - 8922.978: 68.6743% ( 291) 00:07:03.792 8922.978 - 8973.391: 70.6567% ( 288) 00:07:03.792 8973.391 - 9023.803: 72.3362% ( 244) 00:07:03.792 9023.803 - 9074.215: 73.9331% ( 232) 00:07:03.792 9074.215 - 9124.628: 75.2753% ( 195) 00:07:03.792 9124.628 - 9175.040: 76.4937% ( 177) 00:07:03.792 9175.040 - 9225.452: 77.6088% ( 162) 00:07:03.792 9225.452 - 9275.865: 78.7238% ( 162) 00:07:03.792 9275.865 - 9326.277: 79.7770% ( 153) 00:07:03.792 9326.277 - 9376.689: 80.7957% ( 148) 00:07:03.792 9376.689 - 9427.102: 81.7112% ( 133) 00:07:03.792 9427.102 - 9477.514: 82.5578% ( 123) 00:07:03.792 9477.514 - 9527.926: 83.3425% ( 114) 00:07:03.792 9527.926 - 9578.338: 84.1272% ( 114) 00:07:03.792 9578.338 - 9628.751: 84.7260% ( 87) 00:07:03.792 9628.751 - 9679.163: 85.2423% ( 75) 00:07:03.792 9679.163 - 9729.575: 85.7585% ( 75) 00:07:03.792 9729.575 - 9779.988: 86.2266% ( 68) 00:07:03.792 9779.988 - 9830.400: 86.6121% ( 56) 00:07:03.792 9830.400 - 9880.812: 87.0388% ( 62) 00:07:03.792 9880.812 - 9931.225: 87.3899% ( 51) 00:07:03.792 9931.225 - 9981.637: 87.7134% ( 47) 00:07:03.792 9981.637 - 10032.049: 87.9887% ( 40) 00:07:03.792 10032.049 - 10082.462: 88.2503% ( 38) 00:07:03.792 10082.462 - 10132.874: 88.4981% ( 36) 00:07:03.792 10132.874 - 10183.286: 88.7046% ( 30) 00:07:03.792 10183.286 - 10233.698: 88.9042% ( 29) 00:07:03.792 10233.698 - 10284.111: 89.1313% ( 33) 00:07:03.792 10284.111 - 10334.523: 89.3516% ( 32) 00:07:03.792 10334.523 - 10384.935: 89.5512% ( 29) 00:07:03.792 10384.935 - 10435.348: 89.7095% ( 23) 00:07:03.792 10435.348 - 10485.760: 89.8678% ( 23) 00:07:03.792 10485.760 - 10536.172: 90.0537% ( 27) 00:07:03.792 10536.172 - 10586.585: 90.2740% ( 32) 00:07:03.792 10586.585 - 10636.997: 90.4598% ( 27) 00:07:03.792 10636.997 - 10687.409: 90.6732% ( 31) 00:07:03.792 10687.409 - 10737.822: 90.8934% ( 32) 00:07:03.792 10737.822 - 10788.234: 91.0931% ( 29) 00:07:03.792 10788.234 - 10838.646: 91.2927% ( 29) 00:07:03.792 10838.646 - 10889.058: 91.4579% ( 24) 00:07:03.792 10889.058 - 10939.471: 91.6437% ( 27) 00:07:03.792 10939.471 - 10989.883: 91.7952% ( 22) 00:07:03.792 10989.883 - 11040.295: 91.9810% ( 27) 00:07:03.792 11040.295 - 11090.708: 92.1531% ( 25) 00:07:03.792 11090.708 - 11141.120: 92.3045% ( 22) 00:07:03.792 11141.120 - 11191.532: 92.4422% ( 20) 00:07:03.792 11191.532 - 11241.945: 92.5936% ( 22) 00:07:03.792 11241.945 - 11292.357: 92.7175% ( 18) 00:07:03.792 11292.357 - 11342.769: 92.8689% ( 22) 
00:07:03.792 11342.769 - 11393.182: 93.0273% ( 23) 00:07:03.792 11393.182 - 11443.594: 93.1787% ( 22) 00:07:03.792 11443.594 - 11494.006: 93.2682% ( 13) 00:07:03.792 11494.006 - 11544.418: 93.3990% ( 19) 00:07:03.792 11544.418 - 11594.831: 93.5160% ( 17) 00:07:03.792 11594.831 - 11645.243: 93.6605% ( 21) 00:07:03.792 11645.243 - 11695.655: 93.7913% ( 19) 00:07:03.792 11695.655 - 11746.068: 93.9083% ( 17) 00:07:03.792 11746.068 - 11796.480: 94.0460% ( 20) 00:07:03.792 11796.480 - 11846.892: 94.1768% ( 19) 00:07:03.792 11846.892 - 11897.305: 94.3007% ( 18) 00:07:03.792 11897.305 - 11947.717: 94.4108% ( 16) 00:07:03.792 11947.717 - 11998.129: 94.5278% ( 17) 00:07:03.792 11998.129 - 12048.542: 94.6724% ( 21) 00:07:03.792 12048.542 - 12098.954: 94.7756% ( 15) 00:07:03.792 12098.954 - 12149.366: 94.8995% ( 18) 00:07:03.792 12149.366 - 12199.778: 95.0096% ( 16) 00:07:03.792 12199.778 - 12250.191: 95.1473% ( 20) 00:07:03.792 12250.191 - 12300.603: 95.2781% ( 19) 00:07:03.792 12300.603 - 12351.015: 95.3951% ( 17) 00:07:03.792 12351.015 - 12401.428: 95.5465% ( 22) 00:07:03.792 12401.428 - 12451.840: 95.6911% ( 21) 00:07:03.792 12451.840 - 12502.252: 95.8219% ( 19) 00:07:03.792 12502.252 - 12552.665: 95.9595% ( 20) 00:07:03.792 12552.665 - 12603.077: 96.1247% ( 24) 00:07:03.792 12603.077 - 12653.489: 96.2349% ( 16) 00:07:03.792 12653.489 - 12703.902: 96.3450% ( 16) 00:07:03.792 12703.902 - 12754.314: 96.4758% ( 19) 00:07:03.792 12754.314 - 12804.726: 96.5790% ( 15) 00:07:03.792 12804.726 - 12855.138: 96.6616% ( 12) 00:07:03.792 12855.138 - 12905.551: 96.7649% ( 15) 00:07:03.792 12905.551 - 13006.375: 96.9163% ( 22) 00:07:03.792 13006.375 - 13107.200: 97.0402% ( 18) 00:07:03.792 13107.200 - 13208.025: 97.1779% ( 20) 00:07:03.792 13208.025 - 13308.849: 97.2811% ( 15) 00:07:03.792 13308.849 - 13409.674: 97.3637% ( 12) 00:07:03.792 13409.674 - 13510.498: 97.4601% ( 14) 00:07:03.792 13510.498 - 13611.323: 97.5633% ( 15) 00:07:03.792 13611.323 - 13712.148: 97.6528% ( 13) 00:07:03.792 13712.148 - 13812.972: 97.7423% ( 13) 00:07:03.792 13812.972 - 13913.797: 97.7698% ( 4) 00:07:03.792 13913.797 - 14014.622: 97.7974% ( 4) 00:07:03.792 14115.446 - 14216.271: 97.8387% ( 6) 00:07:03.792 14216.271 - 14317.095: 97.9144% ( 11) 00:07:03.792 14317.095 - 14417.920: 97.9763% ( 9) 00:07:03.792 14417.920 - 14518.745: 98.0727% ( 14) 00:07:03.792 14518.745 - 14619.569: 98.1897% ( 17) 00:07:03.792 14619.569 - 14720.394: 98.2861% ( 14) 00:07:03.792 14720.394 - 14821.218: 98.3962% ( 16) 00:07:03.792 14821.218 - 14922.043: 98.5063% ( 16) 00:07:03.792 14922.043 - 15022.868: 98.6096% ( 15) 00:07:03.792 15022.868 - 15123.692: 98.7197% ( 16) 00:07:03.792 15123.692 - 15224.517: 98.8230% ( 15) 00:07:03.792 15224.517 - 15325.342: 98.8987% ( 11) 00:07:03.792 15325.342 - 15426.166: 98.9675% ( 10) 00:07:03.792 15426.166 - 15526.991: 98.9950% ( 4) 00:07:03.792 15526.991 - 15627.815: 99.0226% ( 4) 00:07:03.792 15627.815 - 15728.640: 99.0501% ( 4) 00:07:03.792 15728.640 - 15829.465: 99.0845% ( 5) 00:07:03.792 15829.465 - 15930.289: 99.1189% ( 5) 00:07:03.792 16535.237 - 16636.062: 99.1465% ( 4) 00:07:03.792 16636.062 - 16736.886: 99.1740% ( 4) 00:07:03.792 16736.886 - 16837.711: 99.2084% ( 5) 00:07:03.792 16837.711 - 16938.535: 99.2428% ( 5) 00:07:03.792 16938.535 - 17039.360: 99.2704% ( 4) 00:07:03.792 17039.360 - 17140.185: 99.3048% ( 5) 00:07:03.792 17140.185 - 17241.009: 99.3323% ( 4) 00:07:03.792 17241.009 - 17341.834: 99.3667% ( 5) 00:07:03.792 17341.834 - 17442.658: 99.3943% ( 4) 00:07:03.792 17442.658 - 17543.483: 99.4218% ( 4) 
00:07:03.792 17543.483 - 17644.308: 99.4562% ( 5)
00:07:03.792 17644.308 - 17745.132: 99.4906% ( 5)
00:07:03.792 17745.132 - 17845.957: 99.5182% ( 4)
00:07:03.792 17845.957 - 17946.782: 99.5526% ( 5)
00:07:03.792 17946.782 - 18047.606: 99.5595% ( 1)
00:07:03.792 19257.502 - 19358.326: 99.5732% ( 2)
00:07:03.792 19358.326 - 19459.151: 99.6145% ( 6)
00:07:03.792 19459.151 - 19559.975: 99.6558% ( 6)
00:07:03.792 19559.975 - 19660.800: 99.7040% ( 7)
00:07:03.792 19660.800 - 19761.625: 99.7453% ( 6)
00:07:03.792 19761.625 - 19862.449: 99.7866% ( 6)
00:07:03.792 19862.449 - 19963.274: 99.8279% ( 6)
00:07:03.792 19963.274 - 20064.098: 99.8692% ( 6)
00:07:03.792 20064.098 - 20164.923: 99.9174% ( 7)
00:07:03.792 20164.923 - 20265.748: 99.9587% ( 6)
00:07:03.792 20265.748 - 20366.572: 100.0000% ( 6)
00:07:03.792
00:07:03.792 10:38:24 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:05.167 Initializing NVMe Controllers
00:07:05.167 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:05.167 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:05.167 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:05.167 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:05.167 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:05.167 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:05.167 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:05.167 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:05.167 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:05.167 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:05.167 Initialization complete. Launching workers.
00:07:05.167 ========================================================
00:07:05.167                                                                             Latency(us)
00:07:05.167 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:05.167 PCIE (0000:00:13.0) NSID 1 from core 0:   15911.13     186.46    8046.80    5376.89   28821.76
00:07:05.167 PCIE (0000:00:10.0) NSID 1 from core 0:   15911.13     186.46    8038.06    4998.48   28921.70
00:07:05.167 PCIE (0000:00:11.0) NSID 1 from core 0:   15911.13     186.46    8029.63    4576.24   28267.66
00:07:05.167 PCIE (0000:00:12.0) NSID 1 from core 0:   15911.13     186.46    8021.42    3798.12   28344.84
00:07:05.167 PCIE (0000:00:12.0) NSID 2 from core 0:   15911.13     186.46    8013.21    3649.67   27492.92
00:07:05.167 PCIE (0000:00:12.0) NSID 3 from core 0:   15911.13     186.46    8005.08    3379.69   26600.27
00:07:05.167 ========================================================
00:07:05.167 Total                                  :   95466.79    1118.75    8025.70    3379.69   28921.70
00:07:05.167
00:07:05.167 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:05.167 =================================================================================
00:07:05.167 1.00000% : 6604.012us
00:07:05.167 10.00000% : 7007.311us
00:07:05.167 25.00000% : 7208.960us
00:07:05.167 50.00000% : 7511.434us
00:07:05.167 75.00000% : 8217.206us
00:07:05.167 90.00000% : 9628.751us
00:07:05.167 95.00000% : 10737.822us
00:07:05.167 98.00000% : 13107.200us
00:07:05.167 99.00000% : 14115.446us
00:07:05.167 99.50000% : 19660.800us
00:07:05.167 99.90000% : 28432.542us
00:07:05.167 99.99000% : 28835.840us
00:07:05.167 99.99900% : 28835.840us
00:07:05.167 99.99990% : 28835.840us
00:07:05.167 99.99999% : 28835.840us
00:07:05.167
00:07:05.167 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:05.167 =================================================================================
00:07:05.167 1.00000% : 6503.188us
00:07:05.167 10.00000% : 6906.486us
00:07:05.167 25.00000% : 7158.548us
00:07:05.167 50.00000% : 7511.434us
00:07:05.167 75.00000% : 8267.618us
00:07:05.167 90.00000% : 9729.575us
00:07:05.167 95.00000% : 10939.471us
00:07:05.167 98.00000% : 13208.025us
00:07:05.167 99.00000% : 14014.622us
00:07:05.167 99.50000% : 19761.625us
00:07:05.167 99.90000% : 28432.542us
00:07:05.167 99.99000% : 29037.489us
00:07:05.167 99.99900% : 29037.489us
00:07:05.167 99.99990% : 29037.489us
00:07:05.167 99.99999% : 29037.489us
00:07:05.167
00:07:05.167 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:05.167 =================================================================================
00:07:05.167 1.00000% : 6654.425us
00:07:05.167 10.00000% : 7007.311us
00:07:05.167 25.00000% : 7208.960us
00:07:05.167 50.00000% : 7461.022us
00:07:05.167 75.00000% : 8217.206us
00:07:05.167 90.00000% : 9880.812us
00:07:05.167 95.00000% : 10939.471us
00:07:05.167 98.00000% : 13308.849us
00:07:05.167 99.00000% : 13913.797us
00:07:05.167 99.50000% : 20064.098us
00:07:05.167 99.90000% : 27827.594us
00:07:05.167 99.99000% : 28432.542us
00:07:05.167 99.99900% : 28432.542us
00:07:05.167 99.99990% : 28432.542us
00:07:05.167 99.99999% : 28432.542us
00:07:05.167
00:07:05.167 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:05.167 =================================================================================
00:07:05.167 1.00000% : 6553.600us
00:07:05.167 10.00000% : 7007.311us
00:07:05.167 25.00000% : 7208.960us
00:07:05.167 50.00000% : 7461.022us
00:07:05.167 75.00000% : 8166.794us
00:07:05.167 90.00000% : 9830.400us
00:07:05.167 95.00000% : 10737.822us
00:07:05.167 98.00000% : 13208.025us
00:07:05.167 99.00000% : 14216.271us
00:07:05.167 99.50000% : 19862.449us
00:07:05.167 99.90000% : 28230.892us
00:07:05.167 99.99000% : 28432.542us
00:07:05.167 99.99900% : 28432.542us
00:07:05.167 99.99990% : 28432.542us
00:07:05.167 99.99999% : 28432.542us
00:07:05.167
00:07:05.167 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:05.167 =================================================================================
00:07:05.167 1.00000% : 6503.188us
00:07:05.167 10.00000% : 7007.311us
00:07:05.167 25.00000% : 7208.960us
00:07:05.167 50.00000% : 7461.022us
00:07:05.167 75.00000% : 8166.794us
00:07:05.167 90.00000% : 9679.163us
00:07:05.167 95.00000% : 10586.585us
00:07:05.167 98.00000% : 12905.551us
00:07:05.167 99.00000% : 14216.271us
00:07:05.167 99.50000% : 20064.098us
00:07:05.167 99.90000% : 27424.295us
00:07:05.167 99.99000% : 27625.945us
00:07:05.167 99.99900% : 27625.945us
00:07:05.167 99.99990% : 27625.945us
00:07:05.167 99.99999% : 27625.945us
00:07:05.167
00:07:05.167 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:05.167 =================================================================================
00:07:05.167 1.00000% : 6503.188us
00:07:05.167 10.00000% : 7007.311us
00:07:05.167 25.00000% : 7208.960us
00:07:05.167 50.00000% : 7461.022us
00:07:05.167 75.00000% : 8166.794us
00:07:05.167 90.00000% : 9578.338us
00:07:05.167 95.00000% : 10485.760us
00:07:05.167 98.00000% : 12905.551us
00:07:05.167 99.00000% : 13812.972us
00:07:05.167 99.50000% : 20265.748us
00:07:05.167 99.90000% : 26416.049us
00:07:05.167 99.99000% : 26617.698us
00:07:05.167 99.99900% : 26617.698us
00:07:05.167 99.99990% : 26617.698us
00:07:05.167 99.99999% : 26617.698us
00:07:05.167
00:07:05.167 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:05.167
============================================================================== 00:07:05.167 Range in us Cumulative IO count 00:07:05.167 5368.911 - 5394.117: 0.0063% ( 1) 00:07:05.167 5419.323 - 5444.529: 0.0188% ( 2) 00:07:05.167 5444.529 - 5469.735: 0.0314% ( 2) 00:07:05.167 5469.735 - 5494.942: 0.0502% ( 3) 00:07:05.167 5494.942 - 5520.148: 0.1130% ( 10) 00:07:05.167 5520.148 - 5545.354: 0.2071% ( 15) 00:07:05.167 5545.354 - 5570.560: 0.2824% ( 12) 00:07:05.167 5570.560 - 5595.766: 0.3012% ( 3) 00:07:05.167 5595.766 - 5620.972: 0.3200% ( 3) 00:07:05.167 5620.972 - 5646.178: 0.3326% ( 2) 00:07:05.167 5646.178 - 5671.385: 0.3451% ( 2) 00:07:05.167 5671.385 - 5696.591: 0.3640% ( 3) 00:07:05.167 5696.591 - 5721.797: 0.3765% ( 2) 00:07:05.167 5721.797 - 5747.003: 0.3891% ( 2) 00:07:05.167 5747.003 - 5772.209: 0.4016% ( 2) 00:07:05.167 6301.538 - 6326.745: 0.4079% ( 1) 00:07:05.167 6377.157 - 6402.363: 0.4267% ( 3) 00:07:05.167 6402.363 - 6427.569: 0.4518% ( 4) 00:07:05.167 6427.569 - 6452.775: 0.4832% ( 5) 00:07:05.167 6452.775 - 6503.188: 0.6589% ( 28) 00:07:05.167 6503.188 - 6553.600: 0.8722% ( 34) 00:07:05.167 6553.600 - 6604.012: 1.1860% ( 50) 00:07:05.167 6604.012 - 6654.425: 1.8449% ( 105) 00:07:05.167 6654.425 - 6704.837: 2.4473% ( 96) 00:07:05.167 6704.837 - 6755.249: 3.1878% ( 118) 00:07:05.167 6755.249 - 6805.662: 4.1478% ( 153) 00:07:05.167 6805.662 - 6856.074: 5.3464% ( 191) 00:07:05.167 6856.074 - 6906.486: 6.9591% ( 257) 00:07:05.167 6906.486 - 6956.898: 8.6283% ( 266) 00:07:05.167 6956.898 - 7007.311: 10.9940% ( 377) 00:07:05.167 7007.311 - 7057.723: 14.0876% ( 493) 00:07:05.167 7057.723 - 7108.135: 17.4887% ( 542) 00:07:05.167 7108.135 - 7158.548: 21.1032% ( 576) 00:07:05.167 7158.548 - 7208.960: 25.8597% ( 758) 00:07:05.167 7208.960 - 7259.372: 30.7794% ( 784) 00:07:05.167 7259.372 - 7309.785: 35.4606% ( 746) 00:07:05.167 7309.785 - 7360.197: 40.3614% ( 781) 00:07:05.167 7360.197 - 7410.609: 45.2246% ( 775) 00:07:05.167 7410.609 - 7461.022: 49.6047% ( 698) 00:07:05.167 7461.022 - 7511.434: 53.3258% ( 593) 00:07:05.167 7511.434 - 7561.846: 56.6893% ( 536) 00:07:05.167 7561.846 - 7612.258: 60.2598% ( 569) 00:07:05.167 7612.258 - 7662.671: 63.0961% ( 452) 00:07:05.167 7662.671 - 7713.083: 64.9661% ( 298) 00:07:05.167 7713.083 - 7763.495: 66.6855% ( 274) 00:07:05.167 7763.495 - 7813.908: 68.0284% ( 214) 00:07:05.167 7813.908 - 7864.320: 69.4905% ( 233) 00:07:05.167 7864.320 - 7914.732: 70.8835% ( 222) 00:07:05.167 7914.732 - 7965.145: 71.8248% ( 150) 00:07:05.167 7965.145 - 8015.557: 72.6155% ( 126) 00:07:05.167 8015.557 - 8065.969: 73.4061% ( 126) 00:07:05.167 8065.969 - 8116.382: 74.0587% ( 104) 00:07:05.167 8116.382 - 8166.794: 74.9812% ( 147) 00:07:05.168 8166.794 - 8217.206: 75.7844% ( 128) 00:07:05.168 8217.206 - 8267.618: 76.2989% ( 82) 00:07:05.168 8267.618 - 8318.031: 76.7821% ( 77) 00:07:05.168 8318.031 - 8368.443: 77.3720% ( 94) 00:07:05.168 8368.443 - 8418.855: 77.9430% ( 91) 00:07:05.168 8418.855 - 8469.268: 78.3133% ( 59) 00:07:05.168 8469.268 - 8519.680: 78.6772% ( 58) 00:07:05.168 8519.680 - 8570.092: 79.0976% ( 67) 00:07:05.168 8570.092 - 8620.505: 79.6373% ( 86) 00:07:05.168 8620.505 - 8670.917: 80.1268% ( 78) 00:07:05.168 8670.917 - 8721.329: 80.6225% ( 79) 00:07:05.168 8721.329 - 8771.742: 81.1998% ( 92) 00:07:05.168 8771.742 - 8822.154: 81.8148% ( 98) 00:07:05.168 8822.154 - 8872.566: 82.6933% ( 140) 00:07:05.168 8872.566 - 8922.978: 83.2078% ( 82) 00:07:05.168 8922.978 - 8973.391: 83.7412% ( 85) 00:07:05.168 8973.391 - 9023.803: 84.1930% ( 72) 00:07:05.168 
9023.803 - 9074.215: 84.8896% ( 111) 00:07:05.168 9074.215 - 9124.628: 85.3225% ( 69) 00:07:05.168 9124.628 - 9175.040: 85.7681% ( 71) 00:07:05.168 9175.040 - 9225.452: 86.1885% ( 67) 00:07:05.168 9225.452 - 9275.865: 86.6152% ( 68) 00:07:05.168 9275.865 - 9326.277: 86.9792% ( 58) 00:07:05.168 9326.277 - 9376.689: 87.4749% ( 79) 00:07:05.168 9376.689 - 9427.102: 88.1840% ( 113) 00:07:05.168 9427.102 - 9477.514: 88.5730% ( 62) 00:07:05.168 9477.514 - 9527.926: 89.0186% ( 71) 00:07:05.168 9527.926 - 9578.338: 89.5018% ( 77) 00:07:05.168 9578.338 - 9628.751: 90.0540% ( 88) 00:07:05.168 9628.751 - 9679.163: 90.5622% ( 81) 00:07:05.168 9679.163 - 9729.575: 91.1333% ( 91) 00:07:05.168 9729.575 - 9779.988: 91.5600% ( 68) 00:07:05.168 9779.988 - 9830.400: 91.8361% ( 44) 00:07:05.168 9830.400 - 9880.812: 92.1624% ( 52) 00:07:05.168 9880.812 - 9931.225: 92.4573% ( 47) 00:07:05.168 9931.225 - 9981.637: 92.7836% ( 52) 00:07:05.168 9981.637 - 10032.049: 93.0346% ( 40) 00:07:05.168 10032.049 - 10082.462: 93.2103% ( 28) 00:07:05.168 10082.462 - 10132.874: 93.3923% ( 29) 00:07:05.168 10132.874 - 10183.286: 93.6370% ( 39) 00:07:05.168 10183.286 - 10233.698: 93.8190% ( 29) 00:07:05.168 10233.698 - 10284.111: 93.9947% ( 28) 00:07:05.168 10284.111 - 10334.523: 94.1516% ( 25) 00:07:05.168 10334.523 - 10384.935: 94.3148% ( 26) 00:07:05.168 10384.935 - 10435.348: 94.4842% ( 27) 00:07:05.168 10435.348 - 10485.760: 94.5720% ( 14) 00:07:05.168 10485.760 - 10536.172: 94.6850% ( 18) 00:07:05.168 10536.172 - 10586.585: 94.8293% ( 23) 00:07:05.168 10586.585 - 10636.997: 94.9172% ( 14) 00:07:05.168 10636.997 - 10687.409: 94.9987% ( 13) 00:07:05.168 10687.409 - 10737.822: 95.0866% ( 14) 00:07:05.168 10737.822 - 10788.234: 95.1431% ( 9) 00:07:05.168 10788.234 - 10838.646: 95.1995% ( 9) 00:07:05.168 10838.646 - 10889.058: 95.2623% ( 10) 00:07:05.168 10889.058 - 10939.471: 95.3188% ( 9) 00:07:05.168 10939.471 - 10989.883: 95.3376% ( 3) 00:07:05.168 10989.883 - 11040.295: 95.3627% ( 4) 00:07:05.168 11040.295 - 11090.708: 95.3878% ( 4) 00:07:05.168 11090.708 - 11141.120: 95.4066% ( 3) 00:07:05.168 11141.120 - 11191.532: 95.4317% ( 4) 00:07:05.168 11191.532 - 11241.945: 95.4631% ( 5) 00:07:05.168 11241.945 - 11292.357: 95.4819% ( 3) 00:07:05.168 11292.357 - 11342.769: 95.5070% ( 4) 00:07:05.168 11342.769 - 11393.182: 95.5259% ( 3) 00:07:05.168 11393.182 - 11443.594: 95.5698% ( 7) 00:07:05.168 11443.594 - 11494.006: 95.6263% ( 9) 00:07:05.168 11494.006 - 11544.418: 95.7078% ( 13) 00:07:05.168 11544.418 - 11594.831: 95.7706% ( 10) 00:07:05.168 11594.831 - 11645.243: 95.8459% ( 12) 00:07:05.168 11645.243 - 11695.655: 96.0341% ( 30) 00:07:05.168 11695.655 - 11746.068: 96.1408% ( 17) 00:07:05.168 11746.068 - 11796.480: 96.2161% ( 12) 00:07:05.168 11796.480 - 11846.892: 96.2789% ( 10) 00:07:05.168 11846.892 - 11897.305: 96.3291% ( 8) 00:07:05.168 11897.305 - 11947.717: 96.3604% ( 5) 00:07:05.168 11947.717 - 11998.129: 96.3918% ( 5) 00:07:05.168 11998.129 - 12048.542: 96.4295% ( 6) 00:07:05.168 12048.542 - 12098.954: 96.4546% ( 4) 00:07:05.168 12098.954 - 12149.366: 96.4922% ( 6) 00:07:05.168 12149.366 - 12199.778: 96.5299% ( 6) 00:07:05.168 12199.778 - 12250.191: 96.5801% ( 8) 00:07:05.168 12250.191 - 12300.603: 96.6240% ( 7) 00:07:05.168 12300.603 - 12351.015: 96.6742% ( 8) 00:07:05.168 12351.015 - 12401.428: 96.7118% ( 6) 00:07:05.168 12401.428 - 12451.840: 96.7746% ( 10) 00:07:05.168 12451.840 - 12502.252: 96.8562% ( 13) 00:07:05.168 12502.252 - 12552.665: 96.9691% ( 18) 00:07:05.168 12552.665 - 12603.077: 97.1009% ( 21) 
00:07:05.168 12603.077 - 12653.489: 97.2703% ( 27) 00:07:05.168 12653.489 - 12703.902: 97.4021% ( 21) 00:07:05.168 12703.902 - 12754.314: 97.4900% ( 14) 00:07:05.168 12754.314 - 12804.726: 97.5527% ( 10) 00:07:05.168 12804.726 - 12855.138: 97.6217% ( 11) 00:07:05.168 12855.138 - 12905.551: 97.6782% ( 9) 00:07:05.168 12905.551 - 13006.375: 97.8476% ( 27) 00:07:05.168 13006.375 - 13107.200: 98.0233% ( 28) 00:07:05.168 13107.200 - 13208.025: 98.1865% ( 26) 00:07:05.168 13208.025 - 13308.849: 98.3057% ( 19) 00:07:05.168 13308.849 - 13409.674: 98.3622% ( 9) 00:07:05.168 13409.674 - 13510.498: 98.4061% ( 7) 00:07:05.168 13510.498 - 13611.323: 98.4626% ( 9) 00:07:05.168 13611.323 - 13712.148: 98.5630% ( 16) 00:07:05.168 13712.148 - 13812.972: 98.6822% ( 19) 00:07:05.168 13812.972 - 13913.797: 98.8893% ( 33) 00:07:05.168 13913.797 - 14014.622: 98.9834% ( 15) 00:07:05.168 14014.622 - 14115.446: 99.0462% ( 10) 00:07:05.168 14115.446 - 14216.271: 99.0964% ( 8) 00:07:05.168 14216.271 - 14317.095: 99.1403% ( 7) 00:07:05.168 14317.095 - 14417.920: 99.1780% ( 6) 00:07:05.168 14417.920 - 14518.745: 99.1968% ( 3) 00:07:05.168 18350.080 - 18450.905: 99.2219% ( 4) 00:07:05.168 18450.905 - 18551.729: 99.2784% ( 9) 00:07:05.168 18551.729 - 18652.554: 99.3223% ( 7) 00:07:05.168 18652.554 - 18753.378: 99.3411% ( 3) 00:07:05.168 18753.378 - 18854.203: 99.3725% ( 5) 00:07:05.168 18854.203 - 18955.028: 99.3976% ( 4) 00:07:05.168 18955.028 - 19055.852: 99.4290% ( 5) 00:07:05.168 19055.852 - 19156.677: 99.4541% ( 4) 00:07:05.168 19156.677 - 19257.502: 99.4729% ( 3) 00:07:05.168 19358.326 - 19459.151: 99.4854% ( 2) 00:07:05.168 19459.151 - 19559.975: 99.4980% ( 2) 00:07:05.168 19559.975 - 19660.800: 99.5168% ( 3) 00:07:05.168 19660.800 - 19761.625: 99.5356% ( 3) 00:07:05.168 19761.625 - 19862.449: 99.5545% ( 3) 00:07:05.168 19862.449 - 19963.274: 99.5733% ( 3) 00:07:05.168 19963.274 - 20064.098: 99.5921% ( 3) 00:07:05.168 20064.098 - 20164.923: 99.5984% ( 1) 00:07:05.168 26214.400 - 26416.049: 99.6109% ( 2) 00:07:05.168 26416.049 - 26617.698: 99.6298% ( 3) 00:07:05.168 26617.698 - 26819.348: 99.6486% ( 3) 00:07:05.168 26819.348 - 27020.997: 99.6674% ( 3) 00:07:05.168 27020.997 - 27222.646: 99.7051% ( 6) 00:07:05.168 27222.646 - 27424.295: 99.7490% ( 7) 00:07:05.168 27424.295 - 27625.945: 99.7804% ( 5) 00:07:05.168 27625.945 - 27827.594: 99.8180% ( 6) 00:07:05.168 27827.594 - 28029.243: 99.8557% ( 6) 00:07:05.168 28029.243 - 28230.892: 99.8870% ( 5) 00:07:05.168 28230.892 - 28432.542: 99.9247% ( 6) 00:07:05.168 28432.542 - 28634.191: 99.9561% ( 5) 00:07:05.168 28634.191 - 28835.840: 100.0000% ( 7) 00:07:05.168 00:07:05.168 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:05.168 ============================================================================== 00:07:05.168 Range in us Cumulative IO count 00:07:05.168 4990.818 - 5016.025: 0.0188% ( 3) 00:07:05.168 5016.025 - 5041.231: 0.0377% ( 3) 00:07:05.168 5041.231 - 5066.437: 0.0565% ( 3) 00:07:05.168 5066.437 - 5091.643: 0.1004% ( 7) 00:07:05.168 5091.643 - 5116.849: 0.1757% ( 12) 00:07:05.168 5116.849 - 5142.055: 0.2322% ( 9) 00:07:05.168 5142.055 - 5167.262: 0.2573% ( 4) 00:07:05.168 5167.262 - 5192.468: 0.2636% ( 1) 00:07:05.168 5217.674 - 5242.880: 0.2761% ( 2) 00:07:05.168 5242.880 - 5268.086: 0.2824% ( 1) 00:07:05.168 5268.086 - 5293.292: 0.2887% ( 1) 00:07:05.168 5293.292 - 5318.498: 0.3075% ( 3) 00:07:05.168 5318.498 - 5343.705: 0.3326% ( 4) 00:07:05.168 5343.705 - 5368.911: 0.3389% ( 1) 00:07:05.168 5368.911 - 5394.117: 0.3451% ( 1) 
00:07:05.168 5394.117 - 5419.323: 0.3514% ( 1) 00:07:05.168 5419.323 - 5444.529: 0.3640% ( 2) 00:07:05.168 5444.529 - 5469.735: 0.3702% ( 1) 00:07:05.168 5545.354 - 5570.560: 0.3891% ( 3) 00:07:05.168 5570.560 - 5595.766: 0.4016% ( 2) 00:07:05.168 6225.920 - 6251.126: 0.4079% ( 1) 00:07:05.168 6251.126 - 6276.332: 0.4142% ( 1) 00:07:05.168 6276.332 - 6301.538: 0.4204% ( 1) 00:07:05.168 6301.538 - 6326.745: 0.4832% ( 10) 00:07:05.168 6326.745 - 6351.951: 0.5522% ( 11) 00:07:05.168 6351.951 - 6377.157: 0.6338% ( 13) 00:07:05.168 6377.157 - 6402.363: 0.7279% ( 15) 00:07:05.168 6402.363 - 6427.569: 0.7907% ( 10) 00:07:05.168 6427.569 - 6452.775: 0.8911% ( 16) 00:07:05.168 6452.775 - 6503.188: 1.0166% ( 20) 00:07:05.168 6503.188 - 6553.600: 1.3680% ( 56) 00:07:05.168 6553.600 - 6604.012: 2.0143% ( 103) 00:07:05.168 6604.012 - 6654.425: 2.7799% ( 122) 00:07:05.168 6654.425 - 6704.837: 3.6333% ( 136) 00:07:05.168 6704.837 - 6755.249: 4.4930% ( 137) 00:07:05.168 6755.249 - 6805.662: 6.0680% ( 251) 00:07:05.168 6805.662 - 6856.074: 8.1074% ( 325) 00:07:05.168 6856.074 - 6906.486: 10.4355% ( 371) 00:07:05.168 6906.486 - 6956.898: 12.9393% ( 399) 00:07:05.168 6956.898 - 7007.311: 16.3153% ( 538) 00:07:05.168 7007.311 - 7057.723: 19.7917% ( 554) 00:07:05.168 7057.723 - 7108.135: 22.9920% ( 510) 00:07:05.168 7108.135 - 7158.548: 26.7006% ( 591) 00:07:05.168 7158.548 - 7208.960: 30.5974% ( 621) 00:07:05.168 7208.960 - 7259.372: 34.6888% ( 652) 00:07:05.168 7259.372 - 7309.785: 38.3409% ( 582) 00:07:05.168 7309.785 - 7360.197: 41.7169% ( 538) 00:07:05.168 7360.197 - 7410.609: 44.9674% ( 518) 00:07:05.168 7410.609 - 7461.022: 48.1112% ( 501) 00:07:05.168 7461.022 - 7511.434: 51.0354% ( 466) 00:07:05.168 7511.434 - 7561.846: 53.7023% ( 425) 00:07:05.168 7561.846 - 7612.258: 56.1747% ( 394) 00:07:05.169 7612.258 - 7662.671: 58.6973% ( 402) 00:07:05.169 7662.671 - 7713.083: 60.8873% ( 349) 00:07:05.169 7713.083 - 7763.495: 63.0208% ( 340) 00:07:05.169 7763.495 - 7813.908: 64.9912% ( 314) 00:07:05.169 7813.908 - 7864.320: 66.8863% ( 302) 00:07:05.169 7864.320 - 7914.732: 68.6308% ( 278) 00:07:05.169 7914.732 - 7965.145: 70.0803% ( 231) 00:07:05.169 7965.145 - 8015.557: 71.3730% ( 206) 00:07:05.169 8015.557 - 8065.969: 72.4900% ( 178) 00:07:05.169 8065.969 - 8116.382: 73.2869% ( 127) 00:07:05.169 8116.382 - 8166.794: 74.1027% ( 130) 00:07:05.169 8166.794 - 8217.206: 74.9435% ( 134) 00:07:05.169 8217.206 - 8267.618: 75.6840% ( 118) 00:07:05.169 8267.618 - 8318.031: 76.5939% ( 145) 00:07:05.169 8318.031 - 8368.443: 77.3783% ( 125) 00:07:05.169 8368.443 - 8418.855: 77.9556% ( 92) 00:07:05.169 8418.855 - 8469.268: 78.4576% ( 80) 00:07:05.169 8469.268 - 8519.680: 79.0412% ( 93) 00:07:05.169 8519.680 - 8570.092: 79.7942% ( 120) 00:07:05.169 8570.092 - 8620.505: 80.4405% ( 103) 00:07:05.169 8620.505 - 8670.917: 81.0680% ( 100) 00:07:05.169 8670.917 - 8721.329: 81.5889% ( 83) 00:07:05.169 8721.329 - 8771.742: 82.2164% ( 100) 00:07:05.169 8771.742 - 8822.154: 82.7623% ( 87) 00:07:05.169 8822.154 - 8872.566: 83.3271% ( 90) 00:07:05.169 8872.566 - 8922.978: 83.8228% ( 79) 00:07:05.169 8922.978 - 8973.391: 84.2746% ( 72) 00:07:05.169 8973.391 - 9023.803: 84.7139% ( 70) 00:07:05.169 9023.803 - 9074.215: 85.0464% ( 53) 00:07:05.169 9074.215 - 9124.628: 85.4731% ( 68) 00:07:05.169 9124.628 - 9175.040: 85.8183% ( 55) 00:07:05.169 9175.040 - 9225.452: 86.3015% ( 77) 00:07:05.169 9225.452 - 9275.865: 86.6403% ( 54) 00:07:05.169 9275.865 - 9326.277: 87.0419% ( 64) 00:07:05.169 9326.277 - 9376.689: 87.4310% ( 62) 00:07:05.169 
9376.689 - 9427.102: 87.9393% ( 81) 00:07:05.169 9427.102 - 9477.514: 88.4413% ( 80) 00:07:05.169 9477.514 - 9527.926: 88.8178% ( 60) 00:07:05.169 9527.926 - 9578.338: 89.2068% ( 62) 00:07:05.169 9578.338 - 9628.751: 89.5331% ( 52) 00:07:05.169 9628.751 - 9679.163: 89.8532% ( 51) 00:07:05.169 9679.163 - 9729.575: 90.3301% ( 76) 00:07:05.169 9729.575 - 9779.988: 90.6689% ( 54) 00:07:05.169 9779.988 - 9830.400: 90.9701% ( 48) 00:07:05.169 9830.400 - 9880.812: 91.2651% ( 47) 00:07:05.169 9880.812 - 9931.225: 91.5788% ( 50) 00:07:05.169 9931.225 - 9981.637: 91.8424% ( 42) 00:07:05.169 9981.637 - 10032.049: 92.1185% ( 44) 00:07:05.169 10032.049 - 10082.462: 92.3507% ( 37) 00:07:05.169 10082.462 - 10132.874: 92.5577% ( 33) 00:07:05.169 10132.874 - 10183.286: 92.8464% ( 46) 00:07:05.169 10183.286 - 10233.698: 93.0597% ( 34) 00:07:05.169 10233.698 - 10284.111: 93.3672% ( 49) 00:07:05.169 10284.111 - 10334.523: 93.5931% ( 36) 00:07:05.169 10334.523 - 10384.935: 93.8065% ( 34) 00:07:05.169 10384.935 - 10435.348: 93.9822% ( 28) 00:07:05.169 10435.348 - 10485.760: 94.1140% ( 21) 00:07:05.169 10485.760 - 10536.172: 94.2457% ( 21) 00:07:05.169 10536.172 - 10586.585: 94.3587% ( 18) 00:07:05.169 10586.585 - 10636.997: 94.4842% ( 20) 00:07:05.169 10636.997 - 10687.409: 94.6222% ( 22) 00:07:05.169 10687.409 - 10737.822: 94.7226% ( 16) 00:07:05.169 10737.822 - 10788.234: 94.7979% ( 12) 00:07:05.169 10788.234 - 10838.646: 94.8921% ( 15) 00:07:05.169 10838.646 - 10889.058: 94.9862% ( 15) 00:07:05.169 10889.058 - 10939.471: 95.0740% ( 14) 00:07:05.169 10939.471 - 10989.883: 95.1744% ( 16) 00:07:05.169 10989.883 - 11040.295: 95.2686% ( 15) 00:07:05.169 11040.295 - 11090.708: 95.3564% ( 14) 00:07:05.169 11090.708 - 11141.120: 95.4757% ( 19) 00:07:05.169 11141.120 - 11191.532: 95.5635% ( 14) 00:07:05.169 11191.532 - 11241.945: 95.6890% ( 20) 00:07:05.169 11241.945 - 11292.357: 95.7769% ( 14) 00:07:05.169 11292.357 - 11342.769: 95.8459% ( 11) 00:07:05.169 11342.769 - 11393.182: 95.9086% ( 10) 00:07:05.169 11393.182 - 11443.594: 95.9902% ( 13) 00:07:05.169 11443.594 - 11494.006: 96.0216% ( 5) 00:07:05.169 11494.006 - 11544.418: 96.1094% ( 14) 00:07:05.169 11544.418 - 11594.831: 96.1722% ( 10) 00:07:05.169 11594.831 - 11645.243: 96.2349% ( 10) 00:07:05.169 11645.243 - 11695.655: 96.2726% ( 6) 00:07:05.169 11695.655 - 11746.068: 96.3102% ( 6) 00:07:05.169 11746.068 - 11796.480: 96.3416% ( 5) 00:07:05.169 11796.480 - 11846.892: 96.3730% ( 5) 00:07:05.169 11846.892 - 11897.305: 96.4106% ( 6) 00:07:05.169 11897.305 - 11947.717: 96.4546% ( 7) 00:07:05.169 11947.717 - 11998.129: 96.5110% ( 9) 00:07:05.169 11998.129 - 12048.542: 96.5926% ( 13) 00:07:05.169 12048.542 - 12098.954: 96.6679% ( 12) 00:07:05.169 12098.954 - 12149.366: 96.7369% ( 11) 00:07:05.169 12149.366 - 12199.778: 96.8248% ( 14) 00:07:05.169 12199.778 - 12250.191: 96.9064% ( 13) 00:07:05.169 12250.191 - 12300.603: 96.9942% ( 14) 00:07:05.169 12300.603 - 12351.015: 97.1135% ( 19) 00:07:05.169 12351.015 - 12401.428: 97.1888% ( 12) 00:07:05.169 12401.428 - 12451.840: 97.2390% ( 8) 00:07:05.169 12451.840 - 12502.252: 97.3394% ( 16) 00:07:05.169 12502.252 - 12552.665: 97.3833% ( 7) 00:07:05.169 12552.665 - 12603.077: 97.4209% ( 6) 00:07:05.169 12603.077 - 12653.489: 97.4523% ( 5) 00:07:05.169 12653.489 - 12703.902: 97.4900% ( 6) 00:07:05.169 12703.902 - 12754.314: 97.5276% ( 6) 00:07:05.169 12754.314 - 12804.726: 97.5715% ( 7) 00:07:05.169 12804.726 - 12855.138: 97.6343% ( 10) 00:07:05.169 12855.138 - 12905.551: 97.7221% ( 14) 00:07:05.169 12905.551 - 13006.375: 
97.8978% ( 28) 00:07:05.169 13006.375 - 13107.200: 97.9982% ( 16) 00:07:05.169 13107.200 - 13208.025: 98.0986% ( 16) 00:07:05.169 13208.025 - 13308.849: 98.2116% ( 18) 00:07:05.169 13308.849 - 13409.674: 98.3183% ( 17) 00:07:05.169 13409.674 - 13510.498: 98.4877% ( 27) 00:07:05.169 13510.498 - 13611.323: 98.6822% ( 31) 00:07:05.169 13611.323 - 13712.148: 98.7513% ( 11) 00:07:05.169 13712.148 - 13812.972: 98.8266% ( 12) 00:07:05.169 13812.972 - 13913.797: 98.9332% ( 17) 00:07:05.169 13913.797 - 14014.622: 99.0148% ( 13) 00:07:05.169 14014.622 - 14115.446: 99.0901% ( 12) 00:07:05.169 14115.446 - 14216.271: 99.1654% ( 12) 00:07:05.169 14216.271 - 14317.095: 99.1968% ( 5) 00:07:05.169 18047.606 - 18148.431: 99.2031% ( 1) 00:07:05.169 18148.431 - 18249.255: 99.2219% ( 3) 00:07:05.169 18249.255 - 18350.080: 99.2407% ( 3) 00:07:05.169 18350.080 - 18450.905: 99.2595% ( 3) 00:07:05.169 18450.905 - 18551.729: 99.2784% ( 3) 00:07:05.169 18551.729 - 18652.554: 99.2972% ( 3) 00:07:05.169 18652.554 - 18753.378: 99.3160% ( 3) 00:07:05.169 18753.378 - 18854.203: 99.3348% ( 3) 00:07:05.169 18854.203 - 18955.028: 99.3537% ( 3) 00:07:05.169 18955.028 - 19055.852: 99.3662% ( 2) 00:07:05.169 19055.852 - 19156.677: 99.3850% ( 3) 00:07:05.169 19156.677 - 19257.502: 99.4039% ( 3) 00:07:05.169 19257.502 - 19358.326: 99.4227% ( 3) 00:07:05.169 19358.326 - 19459.151: 99.4415% ( 3) 00:07:05.169 19459.151 - 19559.975: 99.4666% ( 4) 00:07:05.169 19559.975 - 19660.800: 99.4792% ( 2) 00:07:05.169 19660.800 - 19761.625: 99.5043% ( 4) 00:07:05.169 19761.625 - 19862.449: 99.5168% ( 2) 00:07:05.169 19862.449 - 19963.274: 99.5356% ( 3) 00:07:05.169 19963.274 - 20064.098: 99.5545% ( 3) 00:07:05.169 20064.098 - 20164.923: 99.5733% ( 3) 00:07:05.169 20164.923 - 20265.748: 99.5921% ( 3) 00:07:05.169 20265.748 - 20366.572: 99.5984% ( 1) 00:07:05.169 26617.698 - 26819.348: 99.6172% ( 3) 00:07:05.169 26819.348 - 27020.997: 99.6486% ( 5) 00:07:05.169 27020.997 - 27222.646: 99.6862% ( 6) 00:07:05.169 27222.646 - 27424.295: 99.7176% ( 5) 00:07:05.169 27424.295 - 27625.945: 99.7615% ( 7) 00:07:05.169 27625.945 - 27827.594: 99.7992% ( 6) 00:07:05.169 27827.594 - 28029.243: 99.8306% ( 5) 00:07:05.169 28029.243 - 28230.892: 99.8682% ( 6) 00:07:05.169 28230.892 - 28432.542: 99.9059% ( 6) 00:07:05.169 28432.542 - 28634.191: 99.9435% ( 6) 00:07:05.169 28634.191 - 28835.840: 99.9812% ( 6) 00:07:05.169 28835.840 - 29037.489: 100.0000% ( 3) 00:07:05.169 00:07:05.169 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:05.169 ============================================================================== 00:07:05.169 Range in us Cumulative IO count 00:07:05.169 4562.314 - 4587.520: 0.0126% ( 2) 00:07:05.169 4587.520 - 4612.726: 0.0377% ( 4) 00:07:05.169 4612.726 - 4637.932: 0.0565% ( 3) 00:07:05.169 4637.932 - 4663.138: 0.0816% ( 4) 00:07:05.169 4663.138 - 4688.345: 0.1067% ( 4) 00:07:05.169 4713.551 - 4738.757: 0.1192% ( 2) 00:07:05.169 4738.757 - 4763.963: 0.1318% ( 2) 00:07:05.169 4763.963 - 4789.169: 0.1506% ( 3) 00:07:05.169 4789.169 - 4814.375: 0.1569% ( 1) 00:07:05.169 4814.375 - 4839.582: 0.1820% ( 4) 00:07:05.169 4839.582 - 4864.788: 0.1883% ( 1) 00:07:05.169 4940.406 - 4965.612: 0.2071% ( 3) 00:07:05.169 4965.612 - 4990.818: 0.2196% ( 2) 00:07:05.169 4990.818 - 5016.025: 0.2447% ( 4) 00:07:05.169 5016.025 - 5041.231: 0.2761% ( 5) 00:07:05.169 5041.231 - 5066.437: 0.3075% ( 5) 00:07:05.169 5066.437 - 5091.643: 0.3326% ( 4) 00:07:05.169 5091.643 - 5116.849: 0.3640% ( 5) 00:07:05.169 5116.849 - 5142.055: 0.3765% ( 2) 
00:07:05.169 5142.055 - 5167.262: 0.3953% ( 3) 00:07:05.169 5167.262 - 5192.468: 0.4016% ( 1) 00:07:05.169 6377.157 - 6402.363: 0.4079% ( 1) 00:07:05.169 6402.363 - 6427.569: 0.4330% ( 4) 00:07:05.169 6427.569 - 6452.775: 0.4644% ( 5) 00:07:05.169 6452.775 - 6503.188: 0.5648% ( 16) 00:07:05.169 6503.188 - 6553.600: 0.7279% ( 26) 00:07:05.169 6553.600 - 6604.012: 0.9852% ( 41) 00:07:05.169 6604.012 - 6654.425: 1.4997% ( 82) 00:07:05.169 6654.425 - 6704.837: 2.1775% ( 108) 00:07:05.169 6704.837 - 6755.249: 2.8928% ( 114) 00:07:05.169 6755.249 - 6805.662: 3.8906% ( 159) 00:07:05.169 6805.662 - 6856.074: 5.2774% ( 221) 00:07:05.169 6856.074 - 6906.486: 6.8650% ( 253) 00:07:05.169 6906.486 - 6956.898: 8.7224% ( 296) 00:07:05.169 6956.898 - 7007.311: 11.1320% ( 384) 00:07:05.169 7007.311 - 7057.723: 14.7214% ( 572) 00:07:05.169 7057.723 - 7108.135: 18.9445% ( 673) 00:07:05.169 7108.135 - 7158.548: 23.2053% ( 679) 00:07:05.169 7158.548 - 7208.960: 27.8489% ( 740) 00:07:05.169 7208.960 - 7259.372: 32.4485% ( 733) 00:07:05.169 7259.372 - 7309.785: 37.1925% ( 756) 00:07:05.169 7309.785 - 7360.197: 41.9490% ( 758) 00:07:05.170 7360.197 - 7410.609: 47.0005% ( 805) 00:07:05.170 7410.609 - 7461.022: 51.2927% ( 684) 00:07:05.170 7461.022 - 7511.434: 55.4782% ( 667) 00:07:05.170 7511.434 - 7561.846: 59.1365% ( 583) 00:07:05.170 7561.846 - 7612.258: 61.9666% ( 451) 00:07:05.170 7612.258 - 7662.671: 64.3763% ( 384) 00:07:05.170 7662.671 - 7713.083: 66.2337% ( 296) 00:07:05.170 7713.083 - 7763.495: 67.9154% ( 268) 00:07:05.170 7763.495 - 7813.908: 69.0261% ( 177) 00:07:05.170 7813.908 - 7864.320: 69.9674% ( 150) 00:07:05.170 7864.320 - 7914.732: 70.7392% ( 123) 00:07:05.170 7914.732 - 7965.145: 71.3542% ( 98) 00:07:05.170 7965.145 - 8015.557: 71.8562% ( 80) 00:07:05.170 8015.557 - 8065.969: 72.5966% ( 118) 00:07:05.170 8065.969 - 8116.382: 73.6885% ( 174) 00:07:05.170 8116.382 - 8166.794: 74.4352% ( 119) 00:07:05.170 8166.794 - 8217.206: 75.1569% ( 115) 00:07:05.170 8217.206 - 8267.618: 75.7279% ( 91) 00:07:05.170 8267.618 - 8318.031: 76.4056% ( 108) 00:07:05.170 8318.031 - 8368.443: 77.0771% ( 107) 00:07:05.170 8368.443 - 8418.855: 77.7234% ( 103) 00:07:05.170 8418.855 - 8469.268: 78.5015% ( 124) 00:07:05.170 8469.268 - 8519.680: 79.2733% ( 123) 00:07:05.170 8519.680 - 8570.092: 79.9260% ( 104) 00:07:05.170 8570.092 - 8620.505: 80.4844% ( 89) 00:07:05.170 8620.505 - 8670.917: 81.1810% ( 111) 00:07:05.170 8670.917 - 8721.329: 81.7395% ( 89) 00:07:05.170 8721.329 - 8771.742: 82.4046% ( 106) 00:07:05.170 8771.742 - 8822.154: 82.9882% ( 93) 00:07:05.170 8822.154 - 8872.566: 83.7412% ( 120) 00:07:05.170 8872.566 - 8922.978: 84.4252% ( 109) 00:07:05.170 8922.978 - 8973.391: 84.9523% ( 84) 00:07:05.170 8973.391 - 9023.803: 85.6112% ( 105) 00:07:05.170 9023.803 - 9074.215: 86.0191% ( 65) 00:07:05.170 9074.215 - 9124.628: 86.4583% ( 70) 00:07:05.170 9124.628 - 9175.040: 86.8035% ( 55) 00:07:05.170 9175.040 - 9225.452: 87.1109% ( 49) 00:07:05.170 9225.452 - 9275.865: 87.4812% ( 59) 00:07:05.170 9275.865 - 9326.277: 87.9267% ( 71) 00:07:05.170 9326.277 - 9376.689: 88.1463% ( 35) 00:07:05.170 9376.689 - 9427.102: 88.3409% ( 31) 00:07:05.170 9427.102 - 9477.514: 88.5040% ( 26) 00:07:05.170 9477.514 - 9527.926: 88.6170% ( 18) 00:07:05.170 9527.926 - 9578.338: 88.7613% ( 23) 00:07:05.170 9578.338 - 9628.751: 88.8993% ( 22) 00:07:05.170 9628.751 - 9679.163: 89.0688% ( 27) 00:07:05.170 9679.163 - 9729.575: 89.2947% ( 36) 00:07:05.170 9729.575 - 9779.988: 89.5394% ( 39) 00:07:05.170 9779.988 - 9830.400: 89.8281% ( 46) 
00:07:05.170 9830.400 - 9880.812: 90.2485% ( 67) 00:07:05.170 9880.812 - 9931.225: 90.5120% ( 42) 00:07:05.170 9931.225 - 9981.637: 90.8133% ( 48) 00:07:05.170 9981.637 - 10032.049: 91.2274% ( 66) 00:07:05.170 10032.049 - 10082.462: 91.5851% ( 57) 00:07:05.170 10082.462 - 10132.874: 91.9302% ( 55) 00:07:05.170 10132.874 - 10183.286: 92.2816% ( 56) 00:07:05.170 10183.286 - 10233.698: 92.6142% ( 53) 00:07:05.170 10233.698 - 10284.111: 92.8715% ( 41) 00:07:05.170 10284.111 - 10334.523: 93.0848% ( 34) 00:07:05.170 10334.523 - 10384.935: 93.3045% ( 35) 00:07:05.170 10384.935 - 10435.348: 93.5115% ( 33) 00:07:05.170 10435.348 - 10485.760: 93.6998% ( 30) 00:07:05.170 10485.760 - 10536.172: 93.8630% ( 26) 00:07:05.170 10536.172 - 10586.585: 94.0261% ( 26) 00:07:05.170 10586.585 - 10636.997: 94.1893% ( 26) 00:07:05.170 10636.997 - 10687.409: 94.3148% ( 20) 00:07:05.170 10687.409 - 10737.822: 94.4214% ( 17) 00:07:05.170 10737.822 - 10788.234: 94.5971% ( 28) 00:07:05.170 10788.234 - 10838.646: 94.7540% ( 25) 00:07:05.170 10838.646 - 10889.058: 94.9423% ( 30) 00:07:05.170 10889.058 - 10939.471: 95.1556% ( 34) 00:07:05.170 10939.471 - 10989.883: 95.3062% ( 24) 00:07:05.170 10989.883 - 11040.295: 95.4380% ( 21) 00:07:05.170 11040.295 - 11090.708: 95.5761% ( 22) 00:07:05.170 11090.708 - 11141.120: 95.7078% ( 21) 00:07:05.170 11141.120 - 11191.532: 95.8082% ( 16) 00:07:05.170 11191.532 - 11241.945: 95.9212% ( 18) 00:07:05.170 11241.945 - 11292.357: 96.0279% ( 17) 00:07:05.170 11292.357 - 11342.769: 96.1345% ( 17) 00:07:05.170 11342.769 - 11393.182: 96.2287% ( 15) 00:07:05.170 11393.182 - 11443.594: 96.2977% ( 11) 00:07:05.170 11443.594 - 11494.006: 96.3981% ( 16) 00:07:05.170 11494.006 - 11544.418: 96.4734% ( 12) 00:07:05.170 11544.418 - 11594.831: 96.5550% ( 13) 00:07:05.170 11594.831 - 11645.243: 96.6428% ( 14) 00:07:05.170 11645.243 - 11695.655: 96.7432% ( 16) 00:07:05.170 11695.655 - 11746.068: 96.8248% ( 13) 00:07:05.170 11746.068 - 11796.480: 96.9127% ( 14) 00:07:05.170 11796.480 - 11846.892: 97.0131% ( 16) 00:07:05.170 11846.892 - 11897.305: 97.0821% ( 11) 00:07:05.170 11897.305 - 11947.717: 97.1323% ( 8) 00:07:05.170 11947.717 - 11998.129: 97.1888% ( 9) 00:07:05.170 11998.129 - 12048.542: 97.2515% ( 10) 00:07:05.170 12048.542 - 12098.954: 97.2954% ( 7) 00:07:05.170 12098.954 - 12149.366: 97.3268% ( 5) 00:07:05.170 12149.366 - 12199.778: 97.3519% ( 4) 00:07:05.170 12199.778 - 12250.191: 97.3896% ( 6) 00:07:05.170 12250.191 - 12300.603: 97.4272% ( 6) 00:07:05.170 12300.603 - 12351.015: 97.4837% ( 9) 00:07:05.170 12351.015 - 12401.428: 97.5653% ( 13) 00:07:05.170 12401.428 - 12451.840: 97.6782% ( 18) 00:07:05.170 12451.840 - 12502.252: 97.8727% ( 31) 00:07:05.170 12502.252 - 12552.665: 97.9229% ( 8) 00:07:05.170 12552.665 - 12603.077: 97.9669% ( 7) 00:07:05.170 12603.077 - 12653.489: 97.9731% ( 1) 00:07:05.170 12653.489 - 12703.902: 97.9857% ( 2) 00:07:05.170 12703.902 - 12754.314: 97.9920% ( 1) 00:07:05.170 13208.025 - 13308.849: 98.0359% ( 7) 00:07:05.170 13308.849 - 13409.674: 98.1802% ( 23) 00:07:05.170 13409.674 - 13510.498: 98.3747% ( 31) 00:07:05.170 13510.498 - 13611.323: 98.4814% ( 17) 00:07:05.170 13611.323 - 13712.148: 98.7764% ( 47) 00:07:05.170 13712.148 - 13812.972: 98.9583% ( 29) 00:07:05.170 13812.972 - 13913.797: 99.0713% ( 18) 00:07:05.170 13913.797 - 14014.622: 99.1529% ( 13) 00:07:05.170 14014.622 - 14115.446: 99.1968% ( 7) 00:07:05.170 18854.203 - 18955.028: 99.2093% ( 2) 00:07:05.170 18955.028 - 19055.852: 99.2407% ( 5) 00:07:05.170 19055.852 - 19156.677: 99.2595% ( 3) 
00:07:05.170 19156.677 - 19257.502: 99.2909% ( 5) 00:07:05.170 19257.502 - 19358.326: 99.3160% ( 4) 00:07:05.170 19358.326 - 19459.151: 99.3474% ( 5) 00:07:05.170 19459.151 - 19559.975: 99.3725% ( 4) 00:07:05.170 19559.975 - 19660.800: 99.3976% ( 4) 00:07:05.170 19660.800 - 19761.625: 99.4352% ( 6) 00:07:05.170 19761.625 - 19862.449: 99.4603% ( 4) 00:07:05.170 19862.449 - 19963.274: 99.4917% ( 5) 00:07:05.170 19963.274 - 20064.098: 99.5043% ( 2) 00:07:05.170 20064.098 - 20164.923: 99.5231% ( 3) 00:07:05.170 20164.923 - 20265.748: 99.5419% ( 3) 00:07:05.170 20265.748 - 20366.572: 99.5607% ( 3) 00:07:05.170 20366.572 - 20467.397: 99.5733% ( 2) 00:07:05.170 20467.397 - 20568.222: 99.5921% ( 3) 00:07:05.170 20568.222 - 20669.046: 99.5984% ( 1) 00:07:05.170 26617.698 - 26819.348: 99.6109% ( 2) 00:07:05.170 26819.348 - 27020.997: 99.6298% ( 3) 00:07:05.170 27020.997 - 27222.646: 99.6486% ( 3) 00:07:05.170 27222.646 - 27424.295: 99.7678% ( 19) 00:07:05.170 27424.295 - 27625.945: 99.8808% ( 18) 00:07:05.170 27625.945 - 27827.594: 99.9184% ( 6) 00:07:05.170 27827.594 - 28029.243: 99.9561% ( 6) 00:07:05.170 28029.243 - 28230.892: 99.9874% ( 5) 00:07:05.170 28230.892 - 28432.542: 100.0000% ( 2) 00:07:05.170 00:07:05.170 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:05.170 ============================================================================== 00:07:05.170 Range in us Cumulative IO count 00:07:05.170 3780.923 - 3806.129: 0.0063% ( 1) 00:07:05.170 3881.748 - 3906.954: 0.0126% ( 1) 00:07:05.170 3906.954 - 3932.160: 0.0251% ( 2) 00:07:05.170 3932.160 - 3957.366: 0.0502% ( 4) 00:07:05.170 3957.366 - 3982.572: 0.0816% ( 5) 00:07:05.170 3982.572 - 4007.778: 0.1130% ( 5) 00:07:05.170 4007.778 - 4032.985: 0.1757% ( 10) 00:07:05.170 4032.985 - 4058.191: 0.2385% ( 10) 00:07:05.170 4058.191 - 4083.397: 0.2636% ( 4) 00:07:05.170 4083.397 - 4108.603: 0.2761% ( 2) 00:07:05.170 4108.603 - 4133.809: 0.2949% ( 3) 00:07:05.170 4133.809 - 4159.015: 0.3075% ( 2) 00:07:05.170 4159.015 - 4184.222: 0.3200% ( 2) 00:07:05.170 4184.222 - 4209.428: 0.3389% ( 3) 00:07:05.170 4209.428 - 4234.634: 0.3514% ( 2) 00:07:05.170 4234.634 - 4259.840: 0.3640% ( 2) 00:07:05.170 4259.840 - 4285.046: 0.3828% ( 3) 00:07:05.170 4285.046 - 4310.252: 0.3953% ( 2) 00:07:05.170 4310.252 - 4335.458: 0.4016% ( 1) 00:07:05.170 5999.065 - 6024.271: 0.4079% ( 1) 00:07:05.170 6125.095 - 6150.302: 0.4142% ( 1) 00:07:05.170 6150.302 - 6175.508: 0.4267% ( 2) 00:07:05.170 6175.508 - 6200.714: 0.4455% ( 3) 00:07:05.170 6200.714 - 6225.920: 0.4518% ( 1) 00:07:05.170 6225.920 - 6251.126: 0.4769% ( 4) 00:07:05.170 6251.126 - 6276.332: 0.5208% ( 7) 00:07:05.170 6276.332 - 6301.538: 0.5522% ( 5) 00:07:05.170 6301.538 - 6326.745: 0.6150% ( 10) 00:07:05.170 6326.745 - 6351.951: 0.6589% ( 7) 00:07:05.170 6351.951 - 6377.157: 0.6903% ( 5) 00:07:05.170 6377.157 - 6402.363: 0.7279% ( 6) 00:07:05.170 6402.363 - 6427.569: 0.7593% ( 5) 00:07:05.170 6427.569 - 6452.775: 0.7907% ( 5) 00:07:05.170 6452.775 - 6503.188: 0.9162% ( 20) 00:07:05.170 6503.188 - 6553.600: 1.1860% ( 43) 00:07:05.170 6553.600 - 6604.012: 1.5186% ( 53) 00:07:05.170 6604.012 - 6654.425: 1.9390% ( 67) 00:07:05.170 6654.425 - 6704.837: 2.4724% ( 85) 00:07:05.170 6704.837 - 6755.249: 3.2003% ( 116) 00:07:05.170 6755.249 - 6805.662: 4.2671% ( 170) 00:07:05.170 6805.662 - 6856.074: 5.5472% ( 204) 00:07:05.170 6856.074 - 6906.486: 7.3105% ( 281) 00:07:05.170 6906.486 - 6956.898: 9.6448% ( 372) 00:07:05.170 6956.898 - 7007.311: 12.0231% ( 379) 00:07:05.170 7007.311 - 7057.723: 
14.8092% ( 444) 00:07:05.170 7057.723 - 7108.135: 18.4864% ( 586) 00:07:05.170 7108.135 - 7158.548: 22.9292% ( 708) 00:07:05.170 7158.548 - 7208.960: 27.5728% ( 740) 00:07:05.170 7208.960 - 7259.372: 31.8901% ( 688) 00:07:05.170 7259.372 - 7309.785: 36.4270% ( 723) 00:07:05.170 7309.785 - 7360.197: 41.6353% ( 830) 00:07:05.170 7360.197 - 7410.609: 46.5110% ( 777) 00:07:05.170 7410.609 - 7461.022: 50.9036% ( 700) 00:07:05.171 7461.022 - 7511.434: 54.7879% ( 619) 00:07:05.171 7511.434 - 7561.846: 58.0635% ( 522) 00:07:05.171 7561.846 - 7612.258: 61.0128% ( 470) 00:07:05.171 7612.258 - 7662.671: 63.4601% ( 390) 00:07:05.171 7662.671 - 7713.083: 65.2234% ( 281) 00:07:05.171 7713.083 - 7763.495: 67.0745% ( 295) 00:07:05.171 7763.495 - 7813.908: 68.8128% ( 277) 00:07:05.171 7813.908 - 7864.320: 70.4255% ( 257) 00:07:05.171 7864.320 - 7914.732: 71.4671% ( 166) 00:07:05.171 7914.732 - 7965.145: 72.4900% ( 163) 00:07:05.171 7965.145 - 8015.557: 73.1990% ( 113) 00:07:05.171 8015.557 - 8065.969: 73.8203% ( 99) 00:07:05.171 8065.969 - 8116.382: 74.6988% ( 140) 00:07:05.171 8116.382 - 8166.794: 75.3012% ( 96) 00:07:05.171 8166.794 - 8217.206: 75.8346% ( 85) 00:07:05.171 8217.206 - 8267.618: 76.2550% ( 67) 00:07:05.171 8267.618 - 8318.031: 76.8763% ( 99) 00:07:05.171 8318.031 - 8368.443: 77.3971% ( 83) 00:07:05.171 8368.443 - 8418.855: 77.9493% ( 88) 00:07:05.171 8418.855 - 8469.268: 78.3509% ( 64) 00:07:05.171 8469.268 - 8519.680: 78.8090% ( 73) 00:07:05.171 8519.680 - 8570.092: 79.3800% ( 91) 00:07:05.171 8570.092 - 8620.505: 80.0264% ( 103) 00:07:05.171 8620.505 - 8670.917: 80.5346% ( 81) 00:07:05.171 8670.917 - 8721.329: 81.0617% ( 84) 00:07:05.171 8721.329 - 8771.742: 81.8210% ( 121) 00:07:05.171 8771.742 - 8822.154: 82.3544% ( 85) 00:07:05.171 8822.154 - 8872.566: 82.9317% ( 92) 00:07:05.171 8872.566 - 8922.978: 83.7475% ( 130) 00:07:05.171 8922.978 - 8973.391: 84.2934% ( 87) 00:07:05.171 8973.391 - 9023.803: 84.8331% ( 86) 00:07:05.171 9023.803 - 9074.215: 85.5547% ( 115) 00:07:05.171 9074.215 - 9124.628: 86.0693% ( 82) 00:07:05.171 9124.628 - 9175.040: 86.6027% ( 85) 00:07:05.171 9175.040 - 9225.452: 86.9729% ( 59) 00:07:05.171 9225.452 - 9275.865: 87.4247% ( 72) 00:07:05.171 9275.865 - 9326.277: 87.7259% ( 48) 00:07:05.171 9326.277 - 9376.689: 87.9455% ( 35) 00:07:05.171 9376.689 - 9427.102: 88.2154% ( 43) 00:07:05.171 9427.102 - 9477.514: 88.5354% ( 51) 00:07:05.171 9477.514 - 9527.926: 88.7236% ( 30) 00:07:05.171 9527.926 - 9578.338: 88.8931% ( 27) 00:07:05.171 9578.338 - 9628.751: 89.1253% ( 37) 00:07:05.171 9628.751 - 9679.163: 89.3010% ( 28) 00:07:05.171 9679.163 - 9729.575: 89.5708% ( 43) 00:07:05.171 9729.575 - 9779.988: 89.8155% ( 39) 00:07:05.171 9779.988 - 9830.400: 90.1669% ( 56) 00:07:05.171 9830.400 - 9880.812: 90.5309% ( 58) 00:07:05.171 9880.812 - 9931.225: 90.9262% ( 63) 00:07:05.171 9931.225 - 9981.637: 91.3592% ( 69) 00:07:05.171 9981.637 - 10032.049: 91.7420% ( 61) 00:07:05.171 10032.049 - 10082.462: 92.0494% ( 49) 00:07:05.171 10082.462 - 10132.874: 92.4573% ( 65) 00:07:05.171 10132.874 - 10183.286: 92.8025% ( 55) 00:07:05.171 10183.286 - 10233.698: 93.1288% ( 52) 00:07:05.171 10233.698 - 10284.111: 93.3986% ( 43) 00:07:05.171 10284.111 - 10334.523: 93.6119% ( 34) 00:07:05.171 10334.523 - 10384.935: 93.7939% ( 29) 00:07:05.171 10384.935 - 10435.348: 93.9885% ( 31) 00:07:05.171 10435.348 - 10485.760: 94.1767% ( 30) 00:07:05.171 10485.760 - 10536.172: 94.3838% ( 33) 00:07:05.171 10536.172 - 10586.585: 94.5846% ( 32) 00:07:05.171 10586.585 - 10636.997: 94.7603% ( 28) 
00:07:05.171 10636.997 - 10687.409: 94.9548% ( 31) 00:07:05.171 10687.409 - 10737.822: 95.1619% ( 33) 00:07:05.171 10737.822 - 10788.234: 95.2999% ( 22) 00:07:05.171 10788.234 - 10838.646: 95.4317% ( 21) 00:07:05.171 10838.646 - 10889.058: 95.5447% ( 18) 00:07:05.171 10889.058 - 10939.471: 95.6451% ( 16) 00:07:05.171 10939.471 - 10989.883: 95.7455% ( 16) 00:07:05.171 10989.883 - 11040.295: 95.8459% ( 16) 00:07:05.171 11040.295 - 11090.708: 95.9526% ( 17) 00:07:05.171 11090.708 - 11141.120: 96.0592% ( 17) 00:07:05.171 11141.120 - 11191.532: 96.1785% ( 19) 00:07:05.171 11191.532 - 11241.945: 96.2726% ( 15) 00:07:05.171 11241.945 - 11292.357: 96.3291% ( 9) 00:07:05.171 11292.357 - 11342.769: 96.3918% ( 10) 00:07:05.171 11342.769 - 11393.182: 96.4483% ( 9) 00:07:05.171 11393.182 - 11443.594: 96.5048% ( 9) 00:07:05.171 11443.594 - 11494.006: 96.5550% ( 8) 00:07:05.171 11494.006 - 11544.418: 96.6052% ( 8) 00:07:05.171 11544.418 - 11594.831: 96.7056% ( 16) 00:07:05.171 11594.831 - 11645.243: 96.8122% ( 17) 00:07:05.171 11645.243 - 11695.655: 96.9127% ( 16) 00:07:05.171 11695.655 - 11746.068: 96.9754% ( 10) 00:07:05.171 11746.068 - 11796.480: 97.0256% ( 8) 00:07:05.171 11796.480 - 11846.892: 97.0507% ( 4) 00:07:05.171 11846.892 - 11897.305: 97.1009% ( 8) 00:07:05.171 11897.305 - 11947.717: 97.1511% ( 8) 00:07:05.171 11947.717 - 11998.129: 97.2076% ( 9) 00:07:05.171 11998.129 - 12048.542: 97.2703% ( 10) 00:07:05.171 12048.542 - 12098.954: 97.3143% ( 7) 00:07:05.171 12098.954 - 12149.366: 97.3707% ( 9) 00:07:05.171 12149.366 - 12199.778: 97.4084% ( 6) 00:07:05.171 12199.778 - 12250.191: 97.4460% ( 6) 00:07:05.171 12250.191 - 12300.603: 97.4962% ( 8) 00:07:05.171 12300.603 - 12351.015: 97.5590% ( 10) 00:07:05.171 12351.015 - 12401.428: 97.5966% ( 6) 00:07:05.171 12401.428 - 12451.840: 97.6155% ( 3) 00:07:05.171 12451.840 - 12502.252: 97.6594% ( 7) 00:07:05.171 12502.252 - 12552.665: 97.6970% ( 6) 00:07:05.171 12552.665 - 12603.077: 97.7472% ( 8) 00:07:05.171 12603.077 - 12653.489: 97.8288% ( 13) 00:07:05.171 12653.489 - 12703.902: 97.8602% ( 5) 00:07:05.171 12703.902 - 12754.314: 97.8853% ( 4) 00:07:05.171 12754.314 - 12804.726: 97.8978% ( 2) 00:07:05.171 12804.726 - 12855.138: 97.9104% ( 2) 00:07:05.171 12855.138 - 12905.551: 97.9229% ( 2) 00:07:05.171 12905.551 - 13006.375: 97.9606% ( 6) 00:07:05.171 13006.375 - 13107.200: 97.9920% ( 5) 00:07:05.171 13107.200 - 13208.025: 98.0108% ( 3) 00:07:05.171 13208.025 - 13308.849: 98.0547% ( 7) 00:07:05.171 13308.849 - 13409.674: 98.1237% ( 11) 00:07:05.171 13409.674 - 13510.498: 98.2681% ( 23) 00:07:05.171 13510.498 - 13611.323: 98.3685% ( 16) 00:07:05.171 13611.323 - 13712.148: 98.4124% ( 7) 00:07:05.171 13712.148 - 13812.972: 98.4814% ( 11) 00:07:05.171 13812.972 - 13913.797: 98.5944% ( 18) 00:07:05.171 13913.797 - 14014.622: 98.7387% ( 23) 00:07:05.171 14014.622 - 14115.446: 98.9395% ( 32) 00:07:05.171 14115.446 - 14216.271: 99.0399% ( 16) 00:07:05.171 14216.271 - 14317.095: 99.1278% ( 14) 00:07:05.171 14317.095 - 14417.920: 99.1968% ( 11) 00:07:05.171 18652.554 - 18753.378: 99.2031% ( 1) 00:07:05.171 18753.378 - 18854.203: 99.2344% ( 5) 00:07:05.171 18854.203 - 18955.028: 99.2595% ( 4) 00:07:05.171 18955.028 - 19055.852: 99.2909% ( 5) 00:07:05.171 19055.852 - 19156.677: 99.3223% ( 5) 00:07:05.171 19156.677 - 19257.502: 99.3411% ( 3) 00:07:05.171 19257.502 - 19358.326: 99.3662% ( 4) 00:07:05.171 19358.326 - 19459.151: 99.3913% ( 4) 00:07:05.171 19459.151 - 19559.975: 99.4164% ( 4) 00:07:05.171 19559.975 - 19660.800: 99.4478% ( 5) 00:07:05.171 19660.800 - 
19761.625: 99.4792% ( 5) 00:07:05.171 19761.625 - 19862.449: 99.5043% ( 4) 00:07:05.171 19862.449 - 19963.274: 99.5231% ( 3) 00:07:05.171 19963.274 - 20064.098: 99.5356% ( 2) 00:07:05.171 20064.098 - 20164.923: 99.5482% ( 2) 00:07:05.171 20164.923 - 20265.748: 99.5670% ( 3) 00:07:05.171 20265.748 - 20366.572: 99.5858% ( 3) 00:07:05.171 20366.572 - 20467.397: 99.5984% ( 2) 00:07:05.171 26416.049 - 26617.698: 99.6172% ( 3) 00:07:05.171 26617.698 - 26819.348: 99.6423% ( 4) 00:07:05.171 26819.348 - 27020.997: 99.6611% ( 3) 00:07:05.171 27020.997 - 27222.646: 99.7302% ( 11) 00:07:05.171 27222.646 - 27424.295: 99.7678% ( 6) 00:07:05.171 27424.295 - 27625.945: 99.7992% ( 5) 00:07:05.171 27625.945 - 27827.594: 99.8243% ( 4) 00:07:05.171 27827.594 - 28029.243: 99.8431% ( 3) 00:07:05.171 28029.243 - 28230.892: 99.9561% ( 18) 00:07:05.171 28230.892 - 28432.542: 100.0000% ( 7) 00:07:05.171 00:07:05.171 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:05.171 ============================================================================== 00:07:05.171 Range in us Cumulative IO count 00:07:05.171 3629.686 - 3654.892: 0.0063% ( 1) 00:07:05.171 3654.892 - 3680.098: 0.0502% ( 7) 00:07:05.171 3680.098 - 3705.305: 0.0816% ( 5) 00:07:05.171 3705.305 - 3730.511: 0.1506% ( 11) 00:07:05.171 3730.511 - 3755.717: 0.2134% ( 10) 00:07:05.171 3755.717 - 3780.923: 0.2573% ( 7) 00:07:05.171 3780.923 - 3806.129: 0.2698% ( 2) 00:07:05.172 3806.129 - 3831.335: 0.2824% ( 2) 00:07:05.172 3831.335 - 3856.542: 0.2949% ( 2) 00:07:05.172 3856.542 - 3881.748: 0.3075% ( 2) 00:07:05.172 3881.748 - 3906.954: 0.3200% ( 2) 00:07:05.172 3906.954 - 3932.160: 0.3389% ( 3) 00:07:05.172 3932.160 - 3957.366: 0.3514% ( 2) 00:07:05.172 3957.366 - 3982.572: 0.3640% ( 2) 00:07:05.172 3982.572 - 4007.778: 0.3765% ( 2) 00:07:05.172 4007.778 - 4032.985: 0.3953% ( 3) 00:07:05.172 4032.985 - 4058.191: 0.4016% ( 1) 00:07:05.172 5873.034 - 5898.240: 0.4142% ( 2) 00:07:05.172 5898.240 - 5923.446: 0.4330% ( 3) 00:07:05.172 5923.446 - 5948.652: 0.4455% ( 2) 00:07:05.172 5948.652 - 5973.858: 0.4706% ( 4) 00:07:05.172 5973.858 - 5999.065: 0.5083% ( 6) 00:07:05.172 5999.065 - 6024.271: 0.5522% ( 7) 00:07:05.172 6024.271 - 6049.477: 0.6150% ( 10) 00:07:05.172 6049.477 - 6074.683: 0.6652% ( 8) 00:07:05.172 6074.683 - 6099.889: 0.6903% ( 4) 00:07:05.172 6099.889 - 6125.095: 0.7091% ( 3) 00:07:05.172 6125.095 - 6150.302: 0.7216% ( 2) 00:07:05.172 6150.302 - 6175.508: 0.7342% ( 2) 00:07:05.172 6175.508 - 6200.714: 0.7530% ( 3) 00:07:05.172 6200.714 - 6225.920: 0.7656% ( 2) 00:07:05.172 6225.920 - 6251.126: 0.7781% ( 2) 00:07:05.172 6251.126 - 6276.332: 0.7969% ( 3) 00:07:05.172 6276.332 - 6301.538: 0.8032% ( 1) 00:07:05.172 6351.951 - 6377.157: 0.8095% ( 1) 00:07:05.172 6377.157 - 6402.363: 0.8346% ( 4) 00:07:05.172 6402.363 - 6427.569: 0.8911% ( 9) 00:07:05.172 6427.569 - 6452.775: 0.9413% ( 8) 00:07:05.172 6452.775 - 6503.188: 1.0793% ( 22) 00:07:05.172 6503.188 - 6553.600: 1.3366% ( 41) 00:07:05.172 6553.600 - 6604.012: 1.6378% ( 48) 00:07:05.172 6604.012 - 6654.425: 2.0520% ( 66) 00:07:05.172 6654.425 - 6704.837: 2.7548% ( 112) 00:07:05.172 6704.837 - 6755.249: 3.4701% ( 114) 00:07:05.172 6755.249 - 6805.662: 4.4302% ( 153) 00:07:05.172 6805.662 - 6856.074: 5.5911% ( 185) 00:07:05.172 6856.074 - 6906.486: 7.1787% ( 253) 00:07:05.172 6906.486 - 6956.898: 9.2056% ( 323) 00:07:05.172 6956.898 - 7007.311: 11.5274% ( 370) 00:07:05.172 7007.311 - 7057.723: 14.3512% ( 450) 00:07:05.172 7057.723 - 7108.135: 18.3484% ( 637) 00:07:05.172 7108.135 - 
7158.548: 22.5402% ( 668) 00:07:05.172 7158.548 - 7208.960: 26.8637% ( 689) 00:07:05.172 7208.960 - 7259.372: 31.3692% ( 718) 00:07:05.172 7259.372 - 7309.785: 36.0442% ( 745) 00:07:05.172 7309.785 - 7360.197: 41.2149% ( 824) 00:07:05.172 7360.197 - 7410.609: 46.3291% ( 815) 00:07:05.172 7410.609 - 7461.022: 50.3765% ( 645) 00:07:05.172 7461.022 - 7511.434: 54.0412% ( 584) 00:07:05.172 7511.434 - 7561.846: 57.6744% ( 579) 00:07:05.172 7561.846 - 7612.258: 60.8622% ( 508) 00:07:05.172 7612.258 - 7662.671: 63.3785% ( 401) 00:07:05.172 7662.671 - 7713.083: 65.4493% ( 330) 00:07:05.172 7713.083 - 7763.495: 67.3695% ( 306) 00:07:05.172 7763.495 - 7813.908: 68.9069% ( 245) 00:07:05.172 7813.908 - 7864.320: 70.0991% ( 190) 00:07:05.172 7864.320 - 7914.732: 71.2663% ( 186) 00:07:05.172 7914.732 - 7965.145: 72.2703% ( 160) 00:07:05.172 7965.145 - 8015.557: 72.9920% ( 115) 00:07:05.172 8015.557 - 8065.969: 73.6760% ( 109) 00:07:05.172 8065.969 - 8116.382: 74.3913% ( 114) 00:07:05.172 8116.382 - 8166.794: 75.0565% ( 106) 00:07:05.172 8166.794 - 8217.206: 75.6463% ( 94) 00:07:05.172 8217.206 - 8267.618: 76.4433% ( 127) 00:07:05.172 8267.618 - 8318.031: 76.8951% ( 72) 00:07:05.172 8318.031 - 8368.443: 77.4285% ( 85) 00:07:05.172 8368.443 - 8418.855: 77.9179% ( 78) 00:07:05.172 8418.855 - 8469.268: 78.3948% ( 76) 00:07:05.172 8469.268 - 8519.680: 78.9282% ( 85) 00:07:05.172 8519.680 - 8570.092: 79.4365% ( 81) 00:07:05.172 8570.092 - 8620.505: 79.9260% ( 78) 00:07:05.172 8620.505 - 8670.917: 80.4405% ( 82) 00:07:05.172 8670.917 - 8721.329: 81.1684% ( 116) 00:07:05.172 8721.329 - 8771.742: 81.6955% ( 84) 00:07:05.172 8771.742 - 8822.154: 82.2917% ( 95) 00:07:05.172 8822.154 - 8872.566: 82.8062% ( 82) 00:07:05.172 8872.566 - 8922.978: 83.4275% ( 99) 00:07:05.172 8922.978 - 8973.391: 84.1742% ( 119) 00:07:05.172 8973.391 - 9023.803: 84.9084% ( 117) 00:07:05.172 9023.803 - 9074.215: 85.4982% ( 94) 00:07:05.172 9074.215 - 9124.628: 85.9124% ( 66) 00:07:05.172 9124.628 - 9175.040: 86.3642% ( 72) 00:07:05.172 9175.040 - 9225.452: 87.1298% ( 122) 00:07:05.172 9225.452 - 9275.865: 87.5377% ( 65) 00:07:05.172 9275.865 - 9326.277: 87.9706% ( 69) 00:07:05.172 9326.277 - 9376.689: 88.2844% ( 50) 00:07:05.172 9376.689 - 9427.102: 88.6546% ( 59) 00:07:05.172 9427.102 - 9477.514: 89.0499% ( 63) 00:07:05.172 9477.514 - 9527.926: 89.3198% ( 43) 00:07:05.172 9527.926 - 9578.338: 89.5708% ( 40) 00:07:05.172 9578.338 - 9628.751: 89.8783% ( 49) 00:07:05.172 9628.751 - 9679.163: 90.1042% ( 36) 00:07:05.172 9679.163 - 9729.575: 90.3238% ( 35) 00:07:05.172 9729.575 - 9779.988: 90.4932% ( 27) 00:07:05.172 9779.988 - 9830.400: 90.7380% ( 39) 00:07:05.172 9830.400 - 9880.812: 90.9890% ( 40) 00:07:05.172 9880.812 - 9931.225: 91.4094% ( 67) 00:07:05.172 9931.225 - 9981.637: 91.6855% ( 44) 00:07:05.172 9981.637 - 10032.049: 92.0243% ( 54) 00:07:05.172 10032.049 - 10082.462: 92.3381% ( 50) 00:07:05.172 10082.462 - 10132.874: 92.6581% ( 51) 00:07:05.172 10132.874 - 10183.286: 92.9970% ( 54) 00:07:05.172 10183.286 - 10233.698: 93.3107% ( 50) 00:07:05.172 10233.698 - 10284.111: 93.6433% ( 53) 00:07:05.172 10284.111 - 10334.523: 93.9383% ( 47) 00:07:05.172 10334.523 - 10384.935: 94.2206% ( 45) 00:07:05.172 10384.935 - 10435.348: 94.5407% ( 51) 00:07:05.172 10435.348 - 10485.760: 94.7289% ( 30) 00:07:05.172 10485.760 - 10536.172: 94.9046% ( 28) 00:07:05.172 10536.172 - 10586.585: 95.0740% ( 27) 00:07:05.172 10586.585 - 10636.997: 95.2058% ( 21) 00:07:05.172 10636.997 - 10687.409: 95.3502% ( 23) 00:07:05.172 10687.409 - 10737.822: 95.4506% ( 
16) 00:07:05.172 10737.822 - 10788.234: 95.5384% ( 14) 00:07:05.172 10788.234 - 10838.646: 95.5949% ( 9) 00:07:05.172 10838.646 - 10889.058: 95.6514% ( 9) 00:07:05.172 10889.058 - 10939.471: 95.7016% ( 8) 00:07:05.172 10939.471 - 10989.883: 95.7831% ( 13) 00:07:05.172 10989.883 - 11040.295: 95.8647% ( 13) 00:07:05.172 11040.295 - 11090.708: 95.9526% ( 14) 00:07:05.172 11090.708 - 11141.120: 96.0090% ( 9) 00:07:05.172 11141.120 - 11191.532: 96.1094% ( 16) 00:07:05.172 11191.532 - 11241.945: 96.1847% ( 12) 00:07:05.172 11241.945 - 11292.357: 96.2475% ( 10) 00:07:05.172 11292.357 - 11342.769: 96.3040% ( 9) 00:07:05.172 11342.769 - 11393.182: 96.3604% ( 9) 00:07:05.172 11393.182 - 11443.594: 96.4044% ( 7) 00:07:05.172 11443.594 - 11494.006: 96.4232% ( 3) 00:07:05.172 11494.006 - 11544.418: 96.4608% ( 6) 00:07:05.172 11544.418 - 11594.831: 96.4922% ( 5) 00:07:05.172 11594.831 - 11645.243: 96.5801% ( 14) 00:07:05.172 11645.243 - 11695.655: 96.6742% ( 15) 00:07:05.172 11695.655 - 11746.068: 96.7809% ( 17) 00:07:05.172 11746.068 - 11796.480: 96.8436% ( 10) 00:07:05.172 11796.480 - 11846.892: 96.9001% ( 9) 00:07:05.172 11846.892 - 11897.305: 96.9691% ( 11) 00:07:05.172 11897.305 - 11947.717: 97.0131% ( 7) 00:07:05.172 11947.717 - 11998.129: 97.0633% ( 8) 00:07:05.172 11998.129 - 12048.542: 97.0946% ( 5) 00:07:05.172 12048.542 - 12098.954: 97.1386% ( 7) 00:07:05.172 12098.954 - 12149.366: 97.1637% ( 4) 00:07:05.172 12149.366 - 12199.778: 97.2013% ( 6) 00:07:05.172 12199.778 - 12250.191: 97.2641% ( 10) 00:07:05.172 12250.191 - 12300.603: 97.3143% ( 8) 00:07:05.172 12300.603 - 12351.015: 97.3770% ( 10) 00:07:05.172 12351.015 - 12401.428: 97.4272% ( 8) 00:07:05.172 12401.428 - 12451.840: 97.4649% ( 6) 00:07:05.172 12451.840 - 12502.252: 97.5025% ( 6) 00:07:05.172 12502.252 - 12552.665: 97.5464% ( 7) 00:07:05.172 12552.665 - 12603.077: 97.5841% ( 6) 00:07:05.172 12603.077 - 12653.489: 97.6217% ( 6) 00:07:05.172 12653.489 - 12703.902: 97.6719% ( 8) 00:07:05.172 12703.902 - 12754.314: 97.7472% ( 12) 00:07:05.172 12754.314 - 12804.726: 97.8539% ( 17) 00:07:05.172 12804.726 - 12855.138: 97.9606% ( 17) 00:07:05.172 12855.138 - 12905.551: 98.0296% ( 11) 00:07:05.172 12905.551 - 13006.375: 98.1237% ( 15) 00:07:05.172 13006.375 - 13107.200: 98.2116% ( 14) 00:07:05.172 13107.200 - 13208.025: 98.2806% ( 11) 00:07:05.172 13208.025 - 13308.849: 98.3434% ( 10) 00:07:05.172 13308.849 - 13409.674: 98.4124% ( 11) 00:07:05.172 13409.674 - 13510.498: 98.4752% ( 10) 00:07:05.172 13510.498 - 13611.323: 98.5316% ( 9) 00:07:05.172 13611.323 - 13712.148: 98.5881% ( 9) 00:07:05.172 13712.148 - 13812.972: 98.6258% ( 6) 00:07:05.172 13812.972 - 13913.797: 98.7011% ( 12) 00:07:05.172 13913.797 - 14014.622: 98.8077% ( 17) 00:07:05.172 14014.622 - 14115.446: 98.9897% ( 29) 00:07:05.172 14115.446 - 14216.271: 99.0964% ( 17) 00:07:05.172 14216.271 - 14317.095: 99.1340% ( 6) 00:07:05.172 14317.095 - 14417.920: 99.1780% ( 7) 00:07:05.172 14417.920 - 14518.745: 99.1968% ( 3) 00:07:05.172 18753.378 - 18854.203: 99.2093% ( 2) 00:07:05.172 18854.203 - 18955.028: 99.2344% ( 4) 00:07:05.172 18955.028 - 19055.852: 99.2595% ( 4) 00:07:05.172 19055.852 - 19156.677: 99.2846% ( 4) 00:07:05.172 19156.677 - 19257.502: 99.3097% ( 4) 00:07:05.172 19257.502 - 19358.326: 99.3348% ( 4) 00:07:05.172 19358.326 - 19459.151: 99.3662% ( 5) 00:07:05.172 19459.151 - 19559.975: 99.3913% ( 4) 00:07:05.172 19559.975 - 19660.800: 99.4164% ( 4) 00:07:05.172 19660.800 - 19761.625: 99.4478% ( 5) 00:07:05.172 19761.625 - 19862.449: 99.4729% ( 4) 00:07:05.172 19862.449 
- 19963.274: 99.4854% ( 2) 00:07:05.172 19963.274 - 20064.098: 99.5043% ( 3) 00:07:05.172 20064.098 - 20164.923: 99.5231% ( 3) 00:07:05.172 20164.923 - 20265.748: 99.5419% ( 3) 00:07:05.172 20265.748 - 20366.572: 99.5607% ( 3) 00:07:05.172 20366.572 - 20467.397: 99.5796% ( 3) 00:07:05.172 20467.397 - 20568.222: 99.5984% ( 3) 00:07:05.172 26214.400 - 26416.049: 99.6862% ( 14) 00:07:05.172 26416.049 - 26617.698: 99.7866% ( 16) 00:07:05.172 26819.348 - 27020.997: 99.8368% ( 8) 00:07:05.172 27020.997 - 27222.646: 99.8996% ( 10) 00:07:05.172 27222.646 - 27424.295: 99.9749% ( 12) 00:07:05.172 27424.295 - 27625.945: 100.0000% ( 4) 00:07:05.172 00:07:05.172 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:05.173 ============================================================================== 00:07:05.173 Range in us Cumulative IO count 00:07:05.173 3377.625 - 3402.831: 0.0063% ( 1) 00:07:05.173 3402.831 - 3428.037: 0.0126% ( 1) 00:07:05.173 3428.037 - 3453.243: 0.0314% ( 3) 00:07:05.173 3453.243 - 3478.449: 0.0753% ( 7) 00:07:05.173 3478.449 - 3503.655: 0.1569% ( 13) 00:07:05.173 3503.655 - 3528.862: 0.2385% ( 13) 00:07:05.173 3528.862 - 3554.068: 0.2636% ( 4) 00:07:05.173 3554.068 - 3579.274: 0.2761% ( 2) 00:07:05.173 3579.274 - 3604.480: 0.2887% ( 2) 00:07:05.173 3604.480 - 3629.686: 0.3012% ( 2) 00:07:05.173 3629.686 - 3654.892: 0.3138% ( 2) 00:07:05.173 3654.892 - 3680.098: 0.3263% ( 2) 00:07:05.173 3680.098 - 3705.305: 0.3389% ( 2) 00:07:05.173 3705.305 - 3730.511: 0.3514% ( 2) 00:07:05.173 3730.511 - 3755.717: 0.3702% ( 3) 00:07:05.173 3755.717 - 3780.923: 0.3828% ( 2) 00:07:05.173 3780.923 - 3806.129: 0.3953% ( 2) 00:07:05.173 3806.129 - 3831.335: 0.4016% ( 1) 00:07:05.173 5696.591 - 5721.797: 0.4079% ( 1) 00:07:05.173 5721.797 - 5747.003: 0.4330% ( 4) 00:07:05.173 5747.003 - 5772.209: 0.4644% ( 5) 00:07:05.173 5772.209 - 5797.415: 0.5208% ( 9) 00:07:05.173 5797.415 - 5822.622: 0.6087% ( 14) 00:07:05.173 5822.622 - 5847.828: 0.6777% ( 11) 00:07:05.173 5847.828 - 5873.034: 0.6903% ( 2) 00:07:05.173 5873.034 - 5898.240: 0.7028% ( 2) 00:07:05.173 5898.240 - 5923.446: 0.7154% ( 2) 00:07:05.173 5923.446 - 5948.652: 0.7279% ( 2) 00:07:05.173 5948.652 - 5973.858: 0.7467% ( 3) 00:07:05.173 5973.858 - 5999.065: 0.7593% ( 2) 00:07:05.173 5999.065 - 6024.271: 0.7718% ( 2) 00:07:05.173 6024.271 - 6049.477: 0.7844% ( 2) 00:07:05.173 6049.477 - 6074.683: 0.8032% ( 3) 00:07:05.173 6351.951 - 6377.157: 0.8220% ( 3) 00:07:05.173 6377.157 - 6402.363: 0.8409% ( 3) 00:07:05.173 6402.363 - 6427.569: 0.8722% ( 5) 00:07:05.173 6427.569 - 6452.775: 0.9475% ( 12) 00:07:05.173 6452.775 - 6503.188: 1.1295% ( 29) 00:07:05.173 6503.188 - 6553.600: 1.4056% ( 44) 00:07:05.173 6553.600 - 6604.012: 1.7382% ( 53) 00:07:05.173 6604.012 - 6654.425: 2.2465% ( 81) 00:07:05.173 6654.425 - 6704.837: 2.8991% ( 104) 00:07:05.173 6704.837 - 6755.249: 3.5141% ( 98) 00:07:05.173 6755.249 - 6805.662: 4.3361% ( 131) 00:07:05.173 6805.662 - 6856.074: 5.4719% ( 181) 00:07:05.173 6856.074 - 6906.486: 7.0156% ( 246) 00:07:05.173 6906.486 - 6956.898: 9.1867% ( 346) 00:07:05.173 6956.898 - 7007.311: 11.5901% ( 383) 00:07:05.173 7007.311 - 7057.723: 14.4327% ( 453) 00:07:05.173 7057.723 - 7108.135: 17.7774% ( 533) 00:07:05.173 7108.135 - 7158.548: 22.2076% ( 706) 00:07:05.173 7158.548 - 7208.960: 26.7445% ( 723) 00:07:05.173 7208.960 - 7259.372: 31.6516% ( 782) 00:07:05.173 7259.372 - 7309.785: 36.3579% ( 750) 00:07:05.173 7309.785 - 7360.197: 41.4847% ( 817) 00:07:05.173 7360.197 - 7410.609: 46.1471% ( 743) 00:07:05.173 
7410.609 - 7461.022: 50.4581% ( 687) 00:07:05.173 7461.022 - 7511.434: 54.3800% ( 625) 00:07:05.173 7511.434 - 7561.846: 57.7999% ( 545) 00:07:05.173 7561.846 - 7612.258: 60.6677% ( 457) 00:07:05.173 7612.258 - 7662.671: 63.0961% ( 387) 00:07:05.173 7662.671 - 7713.083: 65.2548% ( 344) 00:07:05.173 7713.083 - 7763.495: 67.0306% ( 283) 00:07:05.173 7763.495 - 7813.908: 68.4174% ( 221) 00:07:05.173 7813.908 - 7864.320: 69.5783% ( 185) 00:07:05.173 7864.320 - 7914.732: 70.6953% ( 178) 00:07:05.173 7914.732 - 7965.145: 71.7934% ( 175) 00:07:05.173 7965.145 - 8015.557: 72.9355% ( 182) 00:07:05.173 8015.557 - 8065.969: 73.7262% ( 126) 00:07:05.173 8065.969 - 8116.382: 74.4478% ( 115) 00:07:05.173 8116.382 - 8166.794: 75.0314% ( 93) 00:07:05.173 8166.794 - 8217.206: 75.6714% ( 102) 00:07:05.173 8217.206 - 8267.618: 76.4119% ( 118) 00:07:05.173 8267.618 - 8318.031: 77.0206% ( 97) 00:07:05.173 8318.031 - 8368.443: 77.4473% ( 68) 00:07:05.173 8368.443 - 8418.855: 77.9556% ( 81) 00:07:05.173 8418.855 - 8469.268: 78.4011% ( 71) 00:07:05.173 8469.268 - 8519.680: 78.8843% ( 77) 00:07:05.173 8519.680 - 8570.092: 79.3361% ( 72) 00:07:05.173 8570.092 - 8620.505: 79.9762% ( 102) 00:07:05.173 8620.505 - 8670.917: 80.6162% ( 102) 00:07:05.173 8670.917 - 8721.329: 81.0429% ( 68) 00:07:05.173 8721.329 - 8771.742: 81.5700% ( 84) 00:07:05.173 8771.742 - 8822.154: 82.1034% ( 85) 00:07:05.173 8822.154 - 8872.566: 82.6744% ( 91) 00:07:05.173 8872.566 - 8922.978: 83.2580% ( 93) 00:07:05.173 8922.978 - 8973.391: 83.6847% ( 68) 00:07:05.173 8973.391 - 9023.803: 84.1114% ( 68) 00:07:05.173 9023.803 - 9074.215: 84.6009% ( 78) 00:07:05.173 9074.215 - 9124.628: 85.1343% ( 85) 00:07:05.173 9124.628 - 9175.040: 85.8245% ( 110) 00:07:05.173 9175.040 - 9225.452: 86.5587% ( 117) 00:07:05.173 9225.452 - 9275.865: 87.0168% ( 73) 00:07:05.173 9275.865 - 9326.277: 87.5000% ( 77) 00:07:05.173 9326.277 - 9376.689: 87.9832% ( 77) 00:07:05.173 9376.689 - 9427.102: 88.5856% ( 96) 00:07:05.173 9427.102 - 9477.514: 89.1253% ( 86) 00:07:05.173 9477.514 - 9527.926: 89.7653% ( 102) 00:07:05.173 9527.926 - 9578.338: 90.1857% ( 67) 00:07:05.173 9578.338 - 9628.751: 90.6313% ( 71) 00:07:05.173 9628.751 - 9679.163: 90.9639% ( 53) 00:07:05.173 9679.163 - 9729.575: 91.2964% ( 53) 00:07:05.173 9729.575 - 9779.988: 91.6980% ( 64) 00:07:05.173 9779.988 - 9830.400: 91.9804% ( 45) 00:07:05.173 9830.400 - 9880.812: 92.2440% ( 42) 00:07:05.173 9880.812 - 9931.225: 92.4511% ( 33) 00:07:05.173 9931.225 - 9981.637: 92.6644% ( 34) 00:07:05.173 9981.637 - 10032.049: 92.8840% ( 35) 00:07:05.173 10032.049 - 10082.462: 93.1288% ( 39) 00:07:05.173 10082.462 - 10132.874: 93.3798% ( 40) 00:07:05.173 10132.874 - 10183.286: 93.6496% ( 43) 00:07:05.173 10183.286 - 10233.698: 93.8881% ( 38) 00:07:05.173 10233.698 - 10284.111: 94.1140% ( 36) 00:07:05.173 10284.111 - 10334.523: 94.3085% ( 31) 00:07:05.173 10334.523 - 10384.935: 94.5407% ( 37) 00:07:05.173 10384.935 - 10435.348: 94.7854% ( 39) 00:07:05.173 10435.348 - 10485.760: 95.0113% ( 36) 00:07:05.173 10485.760 - 10536.172: 95.1870% ( 28) 00:07:05.173 10536.172 - 10586.585: 95.2686% ( 13) 00:07:05.173 10586.585 - 10636.997: 95.3502% ( 13) 00:07:05.173 10636.997 - 10687.409: 95.4443% ( 15) 00:07:05.173 10687.409 - 10737.822: 95.5321% ( 14) 00:07:05.173 10737.822 - 10788.234: 95.5761% ( 7) 00:07:05.173 10788.234 - 10838.646: 95.6137% ( 6) 00:07:05.173 10838.646 - 10889.058: 95.6451% ( 5) 00:07:05.173 10889.058 - 10939.471: 95.6765% ( 5) 00:07:05.173 10939.471 - 10989.883: 95.7267% ( 8) 00:07:05.173 10989.883 - 
11040.295: 95.8020% ( 12) 00:07:05.173 11040.295 - 11090.708: 95.8773% ( 12) 00:07:05.173 11090.708 - 11141.120: 96.0969% ( 35) 00:07:05.173 11141.120 - 11191.532: 96.1722% ( 12) 00:07:05.173 11191.532 - 11241.945: 96.2287% ( 9) 00:07:05.173 11241.945 - 11292.357: 96.2851% ( 9) 00:07:05.173 11292.357 - 11342.769: 96.3353% ( 8) 00:07:05.173 11342.769 - 11393.182: 96.3667% ( 5) 00:07:05.173 11393.182 - 11443.594: 96.3855% ( 3) 00:07:05.173 11594.831 - 11645.243: 96.3981% ( 2) 00:07:05.173 11645.243 - 11695.655: 96.4106% ( 2) 00:07:05.173 11695.655 - 11746.068: 96.4295% ( 3) 00:07:05.173 11796.480 - 11846.892: 96.4420% ( 2) 00:07:05.173 11846.892 - 11897.305: 96.4546% ( 2) 00:07:05.173 11897.305 - 11947.717: 96.4671% ( 2) 00:07:05.173 11947.717 - 11998.129: 96.4985% ( 5) 00:07:05.173 11998.129 - 12048.542: 96.5863% ( 14) 00:07:05.173 12048.542 - 12098.954: 96.6616% ( 12) 00:07:05.173 12098.954 - 12149.366: 96.7495% ( 14) 00:07:05.173 12149.366 - 12199.778: 96.8876% ( 22) 00:07:05.173 12199.778 - 12250.191: 97.0131% ( 20) 00:07:05.173 12250.191 - 12300.603: 97.1637% ( 24) 00:07:05.173 12300.603 - 12351.015: 97.2954% ( 21) 00:07:05.173 12351.015 - 12401.428: 97.3896% ( 15) 00:07:05.173 12401.428 - 12451.840: 97.4649% ( 12) 00:07:05.173 12451.840 - 12502.252: 97.5339% ( 11) 00:07:05.173 12502.252 - 12552.665: 97.6029% ( 11) 00:07:05.173 12552.665 - 12603.077: 97.6594% ( 9) 00:07:05.173 12603.077 - 12653.489: 97.7221% ( 10) 00:07:05.173 12653.489 - 12703.902: 97.7974% ( 12) 00:07:05.173 12703.902 - 12754.314: 97.8665% ( 11) 00:07:05.173 12754.314 - 12804.726: 97.9292% ( 10) 00:07:05.173 12804.726 - 12855.138: 97.9794% ( 8) 00:07:05.173 12855.138 - 12905.551: 98.0233% ( 7) 00:07:05.173 12905.551 - 13006.375: 98.1363% ( 18) 00:07:05.173 13006.375 - 13107.200: 98.2304% ( 15) 00:07:05.173 13107.200 - 13208.025: 98.2994% ( 11) 00:07:05.173 13208.025 - 13308.849: 98.4249% ( 20) 00:07:05.173 13308.849 - 13409.674: 98.6069% ( 29) 00:07:05.173 13409.674 - 13510.498: 98.7199% ( 18) 00:07:05.173 13510.498 - 13611.323: 98.8266% ( 17) 00:07:05.173 13611.323 - 13712.148: 98.9332% ( 17) 00:07:05.173 13712.148 - 13812.972: 99.0085% ( 12) 00:07:05.173 13812.972 - 13913.797: 99.0713% ( 10) 00:07:05.173 13913.797 - 14014.622: 99.0901% ( 3) 00:07:05.173 14014.622 - 14115.446: 99.1027% ( 2) 00:07:05.173 14115.446 - 14216.271: 99.1215% ( 3) 00:07:05.173 14216.271 - 14317.095: 99.1403% ( 3) 00:07:05.173 14317.095 - 14417.920: 99.1654% ( 4) 00:07:05.173 14417.920 - 14518.745: 99.1905% ( 4) 00:07:05.173 14518.745 - 14619.569: 99.1968% ( 1) 00:07:05.173 18652.554 - 18753.378: 99.2156% ( 3) 00:07:05.173 18753.378 - 18854.203: 99.2470% ( 5) 00:07:05.173 18854.203 - 18955.028: 99.2721% ( 4) 00:07:05.173 18955.028 - 19055.852: 99.2909% ( 3) 00:07:05.173 19055.852 - 19156.677: 99.3097% ( 3) 00:07:05.173 19156.677 - 19257.502: 99.3411% ( 5) 00:07:05.173 19257.502 - 19358.326: 99.3599% ( 3) 00:07:05.173 19358.326 - 19459.151: 99.3850% ( 4) 00:07:05.173 19459.151 - 19559.975: 99.4039% ( 3) 00:07:05.173 19559.975 - 19660.800: 99.4227% ( 3) 00:07:05.173 19660.800 - 19761.625: 99.4352% ( 2) 00:07:05.173 19862.449 - 19963.274: 99.4478% ( 2) 00:07:05.173 19963.274 - 20064.098: 99.4666% ( 3) 00:07:05.173 20064.098 - 20164.923: 99.4917% ( 4) 00:07:05.173 20164.923 - 20265.748: 99.5043% ( 2) 00:07:05.173 20265.748 - 20366.572: 99.5294% ( 4) 00:07:05.173 20366.572 - 20467.397: 99.5545% ( 4) 00:07:05.173 20467.397 - 20568.222: 99.5670% ( 2) 00:07:05.173 20568.222 - 20669.046: 99.5921% ( 4) 00:07:05.174 20669.046 - 20769.871: 99.5984% ( 1) 
00:07:05.174 25306.978 - 25407.803: 99.6109% ( 2) 00:07:05.174 25407.803 - 25508.628: 99.6988% ( 14) 00:07:05.174 25508.628 - 25609.452: 99.7051% ( 1) 00:07:05.174 25811.102 - 26012.751: 99.7302% ( 4) 00:07:05.174 26012.751 - 26214.400: 99.7866% ( 9) 00:07:05.174 26214.400 - 26416.049: 99.9059% ( 19) 00:07:05.174 26416.049 - 26617.698: 100.0000% ( 15) 00:07:05.174 00:07:05.174 10:38:25 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:05.174 00:07:05.174 real 0m2.449s 00:07:05.174 user 0m2.157s 00:07:05.174 sys 0m0.186s 00:07:05.174 10:38:25 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.174 ************************************ 00:07:05.174 END TEST nvme_perf 00:07:05.174 ************************************ 00:07:05.174 10:38:25 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:05.174 10:38:25 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:05.174 10:38:25 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:05.174 10:38:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.174 10:38:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.174 ************************************ 00:07:05.174 START TEST nvme_hello_world 00:07:05.174 ************************************ 00:07:05.174 10:38:25 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:05.174 Initializing NVMe Controllers 00:07:05.174 Attached to 0000:00:13.0 00:07:05.174 Namespace ID: 1 size: 1GB 00:07:05.174 Attached to 0000:00:10.0 00:07:05.174 Namespace ID: 1 size: 6GB 00:07:05.174 Attached to 0000:00:11.0 00:07:05.174 Namespace ID: 1 size: 5GB 00:07:05.174 Attached to 0000:00:12.0 00:07:05.174 Namespace ID: 1 size: 4GB 00:07:05.174 Namespace ID: 2 size: 4GB 00:07:05.174 Namespace ID: 3 size: 4GB 00:07:05.174 Initialization complete. 00:07:05.174 INFO: using host memory buffer for IO 00:07:05.174 Hello world! 00:07:05.174 INFO: using host memory buffer for IO 00:07:05.174 Hello world! 00:07:05.174 INFO: using host memory buffer for IO 00:07:05.174 Hello world! 00:07:05.174 INFO: using host memory buffer for IO 00:07:05.174 Hello world! 00:07:05.174 INFO: using host memory buffer for IO 00:07:05.174 Hello world! 00:07:05.174 INFO: using host memory buffer for IO 00:07:05.174 Hello world! 
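The hello_world example above attaches to all four controllers and prints one greeting per active namespace; "using host memory buffer for IO" means the IO buffer lives in ordinary pinned host memory rather than a controller memory buffer. A minimal sketch of the write-and-poll path, assuming `ctrlr` and `ns` were already attached via spdk_nvme_probe() (an illustration, not the exact hello_world source):

#include <stdbool.h>
#include <stdio.h>
#include <spdk/env.h>
#include <spdk/nvme.h>

static volatile bool g_done;

/* Completion callback, invoked from spdk_nvme_qpair_process_completions(). */
static void
write_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	g_done = true;
}

/* Write one block and poll for its completion; ctrlr/ns are assumed to come
 * from a prior spdk_nvme_probe() attach, as in the test above. */
static void
hello_write(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
{
	struct spdk_nvme_qpair *qpair =
		spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	uint32_t sz = spdk_nvme_ns_get_sector_size(ns);
	/* IO buffers must be DMA-safe, hence spdk_zmalloc(), not malloc(). */
	char *buf = spdk_zmalloc(sz, 0x1000, NULL,
				 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

	snprintf(buf, sz, "Hello world!\n");
	spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */, 1 /* blocks */,
			       write_done, NULL, 0 /* io_flags */);
	while (!g_done) {
		spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
	}
	spdk_free(buf);
	spdk_nvme_ctrlr_free_io_qpair(qpair);
}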
00:07:05.174 00:07:05.174 real 0m0.182s 00:07:05.174 user 0m0.063s 00:07:05.174 sys 0m0.074s 00:07:05.174 10:38:25 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.174 10:38:25 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:05.174 ************************************ 00:07:05.174 END TEST nvme_hello_world 00:07:05.174 ************************************ 00:07:05.174 10:38:25 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:05.174 10:38:25 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.174 10:38:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.174 10:38:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.174 ************************************ 00:07:05.174 START TEST nvme_sgl 00:07:05.174 ************************************ 00:07:05.174 10:38:25 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:05.432 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:05.432 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:05.432 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:05.432 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:05.432 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:05.432 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:05.432 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:05.432 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:05.432 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:05.432 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:05.432 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:05.432 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:05.432 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:05.432 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:05.432 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:05.432 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:05.432 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:05.432 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:05.432 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:05.432 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:05.432 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:05.432 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:05.432 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:05.432 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:05.432 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:05.432 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:05.432 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:05.432 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:05.432 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:05.432 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:05.432 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:05.432 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:05.432 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:05.432 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:07:05.432 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:05.432 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:05.432 NVMe Readv/Writev Request test 00:07:05.432 Attached to 0000:00:13.0 00:07:05.432 Attached to 0000:00:10.0 00:07:05.432 Attached to 0000:00:11.0 00:07:05.432 Attached to 0000:00:12.0 00:07:05.432 0000:00:10.0: build_io_request_2 test passed 00:07:05.432 0000:00:10.0: build_io_request_4 test passed 00:07:05.432 0000:00:10.0: build_io_request_5 test passed 00:07:05.432 0000:00:10.0: build_io_request_6 test passed 00:07:05.432 0000:00:10.0: build_io_request_7 test passed 00:07:05.432 0000:00:10.0: build_io_request_10 test passed 00:07:05.432 0000:00:11.0: build_io_request_2 test passed 00:07:05.432 0000:00:11.0: build_io_request_4 test passed 00:07:05.432 0000:00:11.0: build_io_request_5 test passed 00:07:05.432 0000:00:11.0: build_io_request_6 test passed 00:07:05.432 0000:00:11.0: build_io_request_7 test passed 00:07:05.432 0000:00:11.0: build_io_request_10 test passed 00:07:05.432 Cleaning up... 00:07:05.432 00:07:05.432 real 0m0.229s 00:07:05.432 user 0m0.108s 00:07:05.432 sys 0m0.084s 00:07:05.432 10:38:25 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.432 10:38:25 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:05.432 ************************************ 00:07:05.432 END TEST nvme_sgl 00:07:05.432 ************************************ 00:07:05.432 10:38:25 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:05.432 10:38:25 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.432 10:38:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.432 10:38:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.432 ************************************ 00:07:05.432 START TEST nvme_e2edp 00:07:05.432 ************************************ 00:07:05.432 10:38:25 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:05.698 NVMe Write/Read with End-to-End data protection test 00:07:05.698 Attached to 0000:00:13.0 00:07:05.698 Attached to 0000:00:10.0 00:07:05.698 Attached to 0000:00:11.0 00:07:05.698 Attached to 0000:00:12.0 00:07:05.698 Cleaning up... 
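The sgl test above drives build_io_request_0 through _11, each assembling a scatter-gather request; the "Invalid IO length parameter" lines are intentional negative cases (the SGL total does not cover a whole number of blocks), while the "test passed" lines are the valid layouts. A hedged sketch of how a vectored read is submitted through SPDK's SGL callbacks; the context struct and names are illustrative:

#include <sys/uio.h>
#include <spdk/nvme.h>

struct sgl_ctx { struct iovec *iov; int iovcnt; int idx; size_t off; };

/* Called once per request so the driver can rewind to sgl_offset. */
static void
reset_sgl(void *ref, uint32_t sgl_offset)
{
	struct sgl_ctx *c = ref;

	c->idx = 0;
	c->off = sgl_offset;
	while (c->off >= c->iov[c->idx].iov_len) {
		c->off -= c->iov[c->idx++].iov_len;
	}
}

/* Called repeatedly to hand the driver the next scatter-gather element. */
static int
next_sge(void *ref, void **address, uint32_t *length)
{
	struct sgl_ctx *c = ref;

	*address = (char *)c->iov[c->idx].iov_base + c->off;
	*length = (uint32_t)(c->iov[c->idx].iov_len - c->off);
	c->off = 0;
	c->idx++;
	return 0;
}

/* Read lba_count blocks into the iovec list; submission fails when the SGL
 * total does not match lba_count * block_size, as in the negative tests. */
static int
sgl_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
	 struct sgl_ctx *c, uint64_t lba, uint32_t lba_count,
	 spdk_nvme_cmd_cb cb)
{
	return spdk_nvme_ns_cmd_readv(ns, qp, lba, lba_count, cb, c, 0,
				      reset_sgl, next_sge);
}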
00:07:05.698 00:07:05.698 real 0m0.185s 00:07:05.698 user 0m0.061s 00:07:05.698 sys 0m0.078s 00:07:05.698 10:38:26 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.698 ************************************ 00:07:05.698 END TEST nvme_e2edp 00:07:05.698 10:38:26 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:05.698 ************************************ 00:07:05.698 10:38:26 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:05.698 10:38:26 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.698 10:38:26 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.698 10:38:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.698 ************************************ 00:07:05.698 START TEST nvme_reserve 00:07:05.698 ************************************ 00:07:05.698 10:38:26 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:05.996 ===================================================== 00:07:05.996 NVMe Controller at PCI bus 0, device 19, function 0 00:07:05.996 ===================================================== 00:07:05.996 Reservations: Not Supported 00:07:05.996 ===================================================== 00:07:05.996 NVMe Controller at PCI bus 0, device 16, function 0 00:07:05.996 ===================================================== 00:07:05.996 Reservations: Not Supported 00:07:05.996 ===================================================== 00:07:05.996 NVMe Controller at PCI bus 0, device 17, function 0 00:07:05.996 ===================================================== 00:07:05.996 Reservations: Not Supported 00:07:05.996 ===================================================== 00:07:05.996 NVMe Controller at PCI bus 0, device 18, function 0 00:07:05.996 ===================================================== 00:07:05.996 Reservations: Not Supported 00:07:05.996 Reservation test passed 00:07:05.996 00:07:05.996 real 0m0.166s 00:07:05.996 user 0m0.053s 00:07:05.996 sys 0m0.081s 00:07:05.996 10:38:26 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.996 10:38:26 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:05.996 ************************************ 00:07:05.996 END TEST nvme_reserve 00:07:05.996 ************************************ 00:07:05.996 10:38:26 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:05.996 10:38:26 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.996 10:38:26 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.996 10:38:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.996 ************************************ 00:07:05.996 START TEST nvme_err_injection 00:07:05.996 ************************************ 00:07:05.996 10:38:26 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:05.996 NVMe Error Injection test 00:07:05.996 Attached to 0000:00:13.0 00:07:05.996 Attached to 0000:00:10.0 00:07:05.996 Attached to 0000:00:11.0 00:07:05.996 Attached to 0000:00:12.0 00:07:05.996 0000:00:13.0: get features failed as expected 00:07:05.996 0000:00:10.0: get features failed as expected 00:07:05.996 0000:00:11.0: get features failed as expected 00:07:05.996 0000:00:12.0: get features failed as expected 00:07:05.996 
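The err_injection run whose first half appears above works by arming SPDK's software error-injection hook before issuing Get Features, which is why every controller first reports "get features failed as expected"; the matching "successfully as expected" retries follow just below once the injected error is consumed. A sketch under the assumption that a NULL qpair targets the admin queue (illustrative, not the test's exact source):

#include <spdk/nvme.h>

/* Arm one injected failure for the next Get Features admin command. */
static int
inject_get_features_error(struct spdk_nvme_ctrlr *ctrlr)
{
	return spdk_nvme_qpair_add_cmd_error_injection(
		ctrlr, NULL /* assumed: NULL selects the admin qpair */,
		SPDK_NVME_OPC_GET_FEATURES,
		false /* still submit the command to the device */,
		0 /* no timeout */,
		1 /* fail exactly one command */,
		SPDK_NVME_SCT_GENERIC,
		SPDK_NVME_SC_INVALID_FIELD);
}

/* Disarm it so the retry succeeds "as expected". */
static void
clear_injection(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
						   SPDK_NVME_OPC_GET_FEATURES);
}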
0000:00:13.0: get features successfully as expected 00:07:05.996 0000:00:10.0: get features successfully as expected 00:07:05.996 0000:00:11.0: get features successfully as expected 00:07:05.996 0000:00:12.0: get features successfully as expected 00:07:05.996 0000:00:13.0: read failed as expected 00:07:05.996 0000:00:10.0: read failed as expected 00:07:05.996 0000:00:11.0: read failed as expected 00:07:05.996 0000:00:12.0: read failed as expected 00:07:05.996 0000:00:13.0: read successfully as expected 00:07:05.996 0000:00:10.0: read successfully as expected 00:07:05.996 0000:00:11.0: read successfully as expected 00:07:05.996 0000:00:12.0: read successfully as expected 00:07:05.996 Cleaning up... 00:07:05.996 00:07:05.996 real 0m0.188s 00:07:05.996 user 0m0.060s 00:07:05.997 sys 0m0.084s 00:07:05.997 10:38:26 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.997 ************************************ 00:07:05.997 END TEST nvme_err_injection 00:07:05.997 ************************************ 00:07:05.997 10:38:26 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:05.997 10:38:26 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:05.997 10:38:26 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:07:05.997 10:38:26 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.997 10:38:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.997 ************************************ 00:07:05.997 START TEST nvme_overhead 00:07:05.997 ************************************ 00:07:05.997 10:38:26 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:07.369 Initializing NVMe Controllers 00:07:07.369 Attached to 0000:00:13.0 00:07:07.369 Attached to 0000:00:10.0 00:07:07.369 Attached to 0000:00:11.0 00:07:07.369 Attached to 0000:00:12.0 00:07:07.369 Initialization complete. Launching workers. 
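The overhead test just launched times the host-side cost of each IO rather than device latency: "submit" is the time spent inside the submission call and "complete" the time spent in the completion path, reported in nanoseconds, which is why the histograms below cluster around 10 to 11 us. A sketch of the measurement idea using SPDK's TSC helpers (not the tool's exact code):

#include <spdk/env.h>
#include <spdk/nvme.h>

/* Convert a TSC delta to nanoseconds. */
static uint64_t
ticks_to_ns(uint64_t ticks)
{
	return ticks * UINT64_C(1000000000) / spdk_get_ticks_hz();
}

/* Time one single-block read submission, as the overhead tool does per IO;
 * buf is assumed to be a DMA-safe buffer of at least one block (-o 4096). */
static uint64_t
timed_submit(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
	     void *buf, uint64_t lba, spdk_nvme_cmd_cb cb)
{
	uint64_t t0 = spdk_get_ticks();

	spdk_nvme_ns_cmd_read(ns, qp, buf, lba, 1, cb, NULL, 0);
	return ticks_to_ns(spdk_get_ticks() - t0);
}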
00:07:07.369 submit (in ns) avg, min, max = 11261.3, 10028.5, 371225.4 00:07:07.369 complete (in ns) avg, min, max = 7465.5, 7151.5, 63495.4 00:07:07.369 00:07:07.369 Submit histogram 00:07:07.369 ================ 00:07:07.369 Range in us Cumulative Count 00:07:07.369 9.994 - 10.043: 0.0064% ( 1) 00:07:07.369 10.092 - 10.142: 0.0128% ( 1) 00:07:07.369 10.634 - 10.683: 0.0193% ( 1) 00:07:07.369 10.683 - 10.732: 0.0257% ( 1) 00:07:07.369 10.732 - 10.782: 0.1798% ( 24) 00:07:07.369 10.782 - 10.831: 1.1944% ( 158) 00:07:07.369 10.831 - 10.880: 5.7086% ( 703) 00:07:07.369 10.880 - 10.929: 16.7983% ( 1727) 00:07:07.369 10.929 - 10.978: 33.9434% ( 2670) 00:07:07.369 10.978 - 11.028: 52.8029% ( 2937) 00:07:07.369 11.028 - 11.077: 68.6701% ( 2471) 00:07:07.369 11.077 - 11.126: 78.4691% ( 1526) 00:07:07.369 11.126 - 11.175: 83.8438% ( 837) 00:07:07.369 11.175 - 11.225: 86.5986% ( 429) 00:07:07.369 11.225 - 11.274: 88.1012% ( 234) 00:07:07.369 11.274 - 11.323: 89.0516% ( 148) 00:07:07.369 11.323 - 11.372: 89.6937% ( 100) 00:07:07.369 11.372 - 11.422: 90.0726% ( 59) 00:07:07.369 11.422 - 11.471: 90.4257% ( 55) 00:07:07.369 11.471 - 11.520: 90.8046% ( 59) 00:07:07.369 11.520 - 11.569: 91.2605% ( 71) 00:07:07.369 11.569 - 11.618: 91.8898% ( 98) 00:07:07.369 11.618 - 11.668: 92.4549% ( 88) 00:07:07.369 11.668 - 11.717: 92.9750% ( 81) 00:07:07.369 11.717 - 11.766: 93.4181% ( 69) 00:07:07.370 11.766 - 11.815: 93.8162% ( 62) 00:07:07.370 11.815 - 11.865: 94.3107% ( 77) 00:07:07.370 11.865 - 11.914: 94.6895% ( 59) 00:07:07.370 11.914 - 11.963: 94.9785% ( 45) 00:07:07.370 11.963 - 12.012: 95.3188% ( 53) 00:07:07.370 12.012 - 12.062: 95.6656% ( 54) 00:07:07.370 12.062 - 12.111: 95.8325% ( 26) 00:07:07.370 12.111 - 12.160: 96.0123% ( 28) 00:07:07.370 12.160 - 12.209: 96.1664% ( 24) 00:07:07.370 12.209 - 12.258: 96.2499% ( 13) 00:07:07.370 12.258 - 12.308: 96.3655% ( 18) 00:07:07.370 12.308 - 12.357: 96.4618% ( 15) 00:07:07.370 12.357 - 12.406: 96.5325% ( 11) 00:07:07.370 12.406 - 12.455: 96.5453% ( 2) 00:07:07.370 12.455 - 12.505: 96.5646% ( 3) 00:07:07.370 12.505 - 12.554: 96.5774% ( 2) 00:07:07.370 12.603 - 12.702: 96.6095% ( 5) 00:07:07.370 12.702 - 12.800: 96.6545% ( 7) 00:07:07.370 12.800 - 12.898: 96.7765% ( 19) 00:07:07.370 12.898 - 12.997: 97.0462% ( 42) 00:07:07.370 12.997 - 13.095: 97.2709% ( 35) 00:07:07.370 13.095 - 13.194: 97.5342% ( 41) 00:07:07.370 13.194 - 13.292: 97.6626% ( 20) 00:07:07.370 13.292 - 13.391: 97.7525% ( 14) 00:07:07.370 13.391 - 13.489: 97.8232% ( 11) 00:07:07.370 13.489 - 13.588: 97.8553% ( 5) 00:07:07.370 13.588 - 13.686: 97.8938% ( 6) 00:07:07.370 13.686 - 13.785: 97.9323% ( 6) 00:07:07.370 13.785 - 13.883: 97.9387% ( 1) 00:07:07.370 13.883 - 13.982: 97.9580% ( 3) 00:07:07.370 13.982 - 14.080: 97.9644% ( 1) 00:07:07.370 14.080 - 14.178: 97.9965% ( 5) 00:07:07.370 14.178 - 14.277: 98.0415% ( 7) 00:07:07.370 14.277 - 14.375: 98.0672% ( 4) 00:07:07.370 14.375 - 14.474: 98.0736% ( 1) 00:07:07.370 14.474 - 14.572: 98.1057% ( 5) 00:07:07.370 14.572 - 14.671: 98.1378% ( 5) 00:07:07.370 14.671 - 14.769: 98.1892% ( 8) 00:07:07.370 14.769 - 14.868: 98.2084% ( 3) 00:07:07.370 14.868 - 14.966: 98.2470% ( 6) 00:07:07.370 14.966 - 15.065: 98.2855% ( 6) 00:07:07.370 15.065 - 15.163: 98.2983% ( 2) 00:07:07.370 15.163 - 15.262: 98.3369% ( 6) 00:07:07.370 15.262 - 15.360: 98.3561% ( 3) 00:07:07.370 15.360 - 15.458: 98.3690% ( 2) 00:07:07.370 15.458 - 15.557: 98.3882% ( 3) 00:07:07.370 15.557 - 15.655: 98.4332% ( 7) 00:07:07.370 15.655 - 15.754: 98.4524% ( 3) 00:07:07.370 15.754 - 15.852: 
98.4589% ( 1) 00:07:07.370 15.852 - 15.951: 98.4781% ( 3) 00:07:07.370 15.951 - 16.049: 98.4846% ( 1) 00:07:07.370 16.049 - 16.148: 98.4910% ( 1) 00:07:07.370 16.148 - 16.246: 98.5102% ( 3) 00:07:07.370 16.246 - 16.345: 98.5295% ( 3) 00:07:07.370 16.345 - 16.443: 98.5745% ( 7) 00:07:07.370 16.443 - 16.542: 98.6708% ( 15) 00:07:07.370 16.542 - 16.640: 98.8249% ( 24) 00:07:07.370 16.640 - 16.738: 98.9533% ( 20) 00:07:07.370 16.738 - 16.837: 99.0689% ( 18) 00:07:07.370 16.837 - 16.935: 99.2038% ( 21) 00:07:07.370 16.935 - 17.034: 99.2744% ( 11) 00:07:07.370 17.034 - 17.132: 99.3707% ( 15) 00:07:07.370 17.132 - 17.231: 99.4413% ( 11) 00:07:07.370 17.231 - 17.329: 99.4991% ( 9) 00:07:07.370 17.329 - 17.428: 99.5377% ( 6) 00:07:07.370 17.428 - 17.526: 99.5762% ( 6) 00:07:07.370 17.526 - 17.625: 99.6083% ( 5) 00:07:07.370 17.723 - 17.822: 99.6532% ( 7) 00:07:07.370 17.822 - 17.920: 99.6725% ( 3) 00:07:07.370 17.920 - 18.018: 99.6789% ( 1) 00:07:07.370 18.018 - 18.117: 99.6854% ( 1) 00:07:07.370 18.117 - 18.215: 99.6918% ( 1) 00:07:07.370 18.215 - 18.314: 99.6982% ( 1) 00:07:07.370 18.314 - 18.412: 99.7046% ( 1) 00:07:07.370 18.412 - 18.511: 99.7110% ( 1) 00:07:07.370 18.511 - 18.609: 99.7239% ( 2) 00:07:07.370 18.708 - 18.806: 99.7303% ( 1) 00:07:07.370 18.905 - 19.003: 99.7496% ( 3) 00:07:07.370 19.003 - 19.102: 99.7688% ( 3) 00:07:07.370 19.102 - 19.200: 99.7817% ( 2) 00:07:07.370 19.298 - 19.397: 99.7945% ( 2) 00:07:07.370 19.495 - 19.594: 99.8009% ( 1) 00:07:07.370 19.692 - 19.791: 99.8202% ( 3) 00:07:07.370 19.988 - 20.086: 99.8266% ( 1) 00:07:07.370 20.283 - 20.382: 99.8330% ( 1) 00:07:07.370 20.382 - 20.480: 99.8395% ( 1) 00:07:07.370 20.578 - 20.677: 99.8459% ( 1) 00:07:07.370 21.169 - 21.268: 99.8652% ( 3) 00:07:07.370 21.662 - 21.760: 99.8716% ( 1) 00:07:07.370 21.760 - 21.858: 99.8780% ( 1) 00:07:07.370 22.449 - 22.548: 99.8844% ( 1) 00:07:07.370 22.843 - 22.942: 99.8908% ( 1) 00:07:07.370 25.600 - 25.797: 99.8973% ( 1) 00:07:07.370 25.994 - 26.191: 99.9037% ( 1) 00:07:07.370 27.766 - 27.963: 99.9101% ( 1) 00:07:07.370 30.523 - 30.720: 99.9165% ( 1) 00:07:07.370 30.720 - 30.917: 99.9229% ( 1) 00:07:07.370 31.508 - 31.705: 99.9294% ( 1) 00:07:07.370 32.295 - 32.492: 99.9358% ( 1) 00:07:07.370 39.188 - 39.385: 99.9422% ( 1) 00:07:07.370 46.080 - 46.277: 99.9486% ( 1) 00:07:07.370 47.065 - 47.262: 99.9551% ( 1) 00:07:07.370 50.018 - 50.215: 99.9615% ( 1) 00:07:07.370 52.382 - 52.775: 99.9679% ( 1) 00:07:07.370 56.714 - 57.108: 99.9743% ( 1) 00:07:07.370 57.108 - 57.502: 99.9807% ( 1) 00:07:07.370 60.652 - 61.046: 99.9872% ( 1) 00:07:07.370 69.317 - 69.711: 99.9936% ( 1) 00:07:07.370 370.215 - 371.791: 100.0000% ( 1) 00:07:07.370 00:07:07.370 Complete histogram 00:07:07.370 ================== 00:07:07.370 Range in us Cumulative Count 00:07:07.370 7.138 - 7.188: 0.2761% ( 43) 00:07:07.370 7.188 - 7.237: 4.6105% ( 675) 00:07:07.370 7.237 - 7.286: 20.9016% ( 2537) 00:07:07.370 7.286 - 7.335: 46.2788% ( 3952) 00:07:07.370 7.335 - 7.385: 68.8050% ( 3508) 00:07:07.370 7.385 - 7.434: 82.4953% ( 2132) 00:07:07.370 7.434 - 7.483: 88.8589% ( 991) 00:07:07.370 7.483 - 7.532: 92.0503% ( 497) 00:07:07.370 7.532 - 7.582: 94.3299% ( 355) 00:07:07.370 7.582 - 7.631: 95.9160% ( 247) 00:07:07.370 7.631 - 7.680: 97.1168% ( 187) 00:07:07.370 7.680 - 7.729: 97.8232% ( 110) 00:07:07.370 7.729 - 7.778: 98.1314% ( 48) 00:07:07.370 7.778 - 7.828: 98.2855% ( 24) 00:07:07.370 7.828 - 7.877: 98.3433% ( 9) 00:07:07.370 7.877 - 7.926: 98.4139% ( 11) 00:07:07.370 7.926 - 7.975: 98.4203% ( 1) 00:07:07.370 8.025 - 8.074: 
98.4332% ( 2) 00:07:07.370 8.074 - 8.123: 98.4460% ( 2) 00:07:07.370 8.123 - 8.172: 98.4589% ( 2) 00:07:07.370 8.172 - 8.222: 98.4781% ( 3) 00:07:07.370 8.271 - 8.320: 98.4910% ( 2) 00:07:07.370 8.418 - 8.468: 98.4974% ( 1) 00:07:07.370 8.468 - 8.517: 98.5038% ( 1) 00:07:07.370 8.517 - 8.566: 98.5102% ( 1) 00:07:07.370 8.566 - 8.615: 98.5167% ( 1) 00:07:07.370 9.009 - 9.058: 98.5231% ( 1) 00:07:07.370 9.058 - 9.108: 98.5295% ( 1) 00:07:07.370 9.108 - 9.157: 98.5359% ( 1) 00:07:07.370 9.157 - 9.206: 98.5423% ( 1) 00:07:07.370 9.206 - 9.255: 98.5488% ( 1) 00:07:07.370 9.354 - 9.403: 98.5552% ( 1) 00:07:07.370 9.403 - 9.452: 98.5616% ( 1) 00:07:07.370 9.502 - 9.551: 98.5680% ( 1) 00:07:07.370 10.092 - 10.142: 98.5745% ( 1) 00:07:07.370 10.289 - 10.338: 98.5809% ( 1) 00:07:07.370 10.338 - 10.388: 98.5937% ( 2) 00:07:07.370 10.634 - 10.683: 98.6066% ( 2) 00:07:07.370 10.880 - 10.929: 98.6130% ( 1) 00:07:07.370 10.978 - 11.028: 98.6194% ( 1) 00:07:07.370 11.028 - 11.077: 98.6258% ( 1) 00:07:07.370 11.077 - 11.126: 98.6322% ( 1) 00:07:07.370 11.126 - 11.175: 98.6451% ( 2) 00:07:07.370 11.175 - 11.225: 98.6644% ( 3) 00:07:07.370 11.717 - 11.766: 98.6708% ( 1) 00:07:07.370 11.766 - 11.815: 98.6772% ( 1) 00:07:07.370 11.914 - 11.963: 98.6836% ( 1) 00:07:07.370 12.308 - 12.357: 98.6900% ( 1) 00:07:07.370 12.357 - 12.406: 98.6965% ( 1) 00:07:07.370 12.455 - 12.505: 98.7029% ( 1) 00:07:07.370 12.554 - 12.603: 98.7093% ( 1) 00:07:07.370 12.702 - 12.800: 98.7157% ( 1) 00:07:07.370 12.800 - 12.898: 98.7735% ( 9) 00:07:07.370 12.898 - 12.997: 98.8313% ( 9) 00:07:07.370 12.997 - 13.095: 98.9533% ( 19) 00:07:07.370 13.095 - 13.194: 99.0496% ( 15) 00:07:07.370 13.194 - 13.292: 99.1395% ( 14) 00:07:07.370 13.292 - 13.391: 99.2038% ( 10) 00:07:07.370 13.391 - 13.489: 99.2872% ( 13) 00:07:07.370 13.489 - 13.588: 99.3900% ( 16) 00:07:07.370 13.588 - 13.686: 99.4606% ( 11) 00:07:07.370 13.686 - 13.785: 99.5377% ( 12) 00:07:07.370 13.785 - 13.883: 99.5955% ( 9) 00:07:07.370 13.883 - 13.982: 99.6404% ( 7) 00:07:07.370 13.982 - 14.080: 99.6725% ( 5) 00:07:07.370 14.080 - 14.178: 99.6789% ( 1) 00:07:07.370 14.178 - 14.277: 99.6982% ( 3) 00:07:07.370 14.277 - 14.375: 99.7303% ( 5) 00:07:07.370 14.375 - 14.474: 99.7431% ( 2) 00:07:07.370 14.474 - 14.572: 99.7496% ( 1) 00:07:07.370 14.769 - 14.868: 99.7688% ( 3) 00:07:07.370 14.966 - 15.065: 99.7753% ( 1) 00:07:07.370 15.065 - 15.163: 99.7817% ( 1) 00:07:07.370 15.163 - 15.262: 99.7881% ( 1) 00:07:07.370 15.262 - 15.360: 99.7945% ( 1) 00:07:07.370 15.754 - 15.852: 99.8009% ( 1) 00:07:07.370 16.148 - 16.246: 99.8074% ( 1) 00:07:07.370 16.738 - 16.837: 99.8266% ( 3) 00:07:07.370 16.837 - 16.935: 99.8330% ( 1) 00:07:07.370 16.935 - 17.034: 99.8395% ( 1) 00:07:07.370 17.034 - 17.132: 99.8459% ( 1) 00:07:07.370 17.132 - 17.231: 99.8523% ( 1) 00:07:07.370 17.231 - 17.329: 99.8587% ( 1) 00:07:07.371 17.329 - 17.428: 99.8652% ( 1) 00:07:07.371 17.920 - 18.018: 99.8716% ( 1) 00:07:07.371 18.117 - 18.215: 99.8780% ( 1) 00:07:07.371 19.397 - 19.495: 99.8844% ( 1) 00:07:07.371 20.677 - 20.775: 99.8908% ( 1) 00:07:07.371 21.268 - 21.366: 99.8973% ( 1) 00:07:07.371 21.563 - 21.662: 99.9037% ( 1) 00:07:07.371 21.662 - 21.760: 99.9101% ( 1) 00:07:07.371 21.858 - 21.957: 99.9165% ( 1) 00:07:07.371 21.957 - 22.055: 99.9229% ( 1) 00:07:07.371 22.252 - 22.351: 99.9294% ( 1) 00:07:07.371 22.449 - 22.548: 99.9422% ( 2) 00:07:07.371 22.942 - 23.040: 99.9486% ( 1) 00:07:07.371 23.237 - 23.335: 99.9551% ( 1) 00:07:07.371 24.418 - 24.517: 99.9615% ( 1) 00:07:07.371 27.175 - 27.372: 99.9679% ( 1) 
00:07:07.371 27.963 - 28.160: 99.9743% ( 1) 00:07:07.371 29.538 - 29.735: 99.9807% ( 1) 00:07:07.371 36.628 - 36.825: 99.9872% ( 1) 00:07:07.371 47.458 - 47.655: 99.9936% ( 1) 00:07:07.371 63.409 - 63.803: 100.0000% ( 1) 00:07:07.371 00:07:07.371 00:07:07.371 real 0m1.190s 00:07:07.371 user 0m1.054s 00:07:07.371 sys 0m0.088s 00:07:07.371 10:38:27 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.371 10:38:27 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:07.371 ************************************ 00:07:07.371 END TEST nvme_overhead 00:07:07.371 ************************************ 00:07:07.371 10:38:27 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:07.371 10:38:27 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:07.371 10:38:27 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.371 10:38:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.371 ************************************ 00:07:07.371 START TEST nvme_arbitration 00:07:07.371 ************************************ 00:07:07.371 10:38:27 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:10.674 Initializing NVMe Controllers 00:07:10.674 Attached to 0000:00:13.0 00:07:10.674 Attached to 0000:00:10.0 00:07:10.674 Attached to 0000:00:11.0 00:07:10.674 Attached to 0000:00:12.0 00:07:10.674 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:07:10.674 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:07:10.674 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:07:10.674 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:10.674 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:10.674 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:10.674 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:10.674 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:10.674 Initialization complete. Launching workers. 
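The arbitration run configured above pins one reader per core (-c 0xf) and gives each worker thread its own IO queue pair created with an explicit priority class (here all urgent); the per-core IO/s figures that follow show how the controller arbitrates among those queues. The priority is fixed at queue-pair allocation time; a minimal sketch, assuming the controller was initialized with weighted-round-robin arbitration:

#include <spdk/nvme.h>

/* Allocate an IO qpair with an explicit priority class. The qprio field is
 * only honored when the controller uses weighted-round-robin arbitration. */
static struct spdk_nvme_qpair *
alloc_prio_qpair(struct spdk_nvme_ctrlr *ctrlr, enum spdk_nvme_qprio prio)
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.qprio = prio;	/* e.g. SPDK_NVME_QPRIO_URGENT or _LOW */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}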
00:07:10.674 Starting thread on core 1 with urgent priority queue 00:07:10.674 Starting thread on core 2 with urgent priority queue 00:07:10.674 Starting thread on core 3 with urgent priority queue 00:07:10.674 Starting thread on core 0 with urgent priority queue 00:07:10.674 QEMU NVMe Ctrl (12343 ) core 0: 4053.33 IO/s 24.67 secs/100000 ios 00:07:10.674 QEMU NVMe Ctrl (12342 ) core 0: 4053.33 IO/s 24.67 secs/100000 ios 00:07:10.674 QEMU NVMe Ctrl (12340 ) core 1: 4138.67 IO/s 24.16 secs/100000 ios 00:07:10.674 QEMU NVMe Ctrl (12342 ) core 1: 4138.67 IO/s 24.16 secs/100000 ios 00:07:10.674 QEMU NVMe Ctrl (12341 ) core 2: 3989.33 IO/s 25.07 secs/100000 ios 00:07:10.674 QEMU NVMe Ctrl (12342 ) core 3: 3754.67 IO/s 26.63 secs/100000 ios 00:07:10.674 ======================================================== 00:07:10.674 00:07:10.674 00:07:10.674 real 0m3.207s 00:07:10.674 user 0m9.010s 00:07:10.674 sys 0m0.093s 00:07:10.674 10:38:30 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.674 ************************************ 00:07:10.674 END TEST nvme_arbitration 00:07:10.674 ************************************ 00:07:10.674 10:38:30 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:10.674 10:38:31 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:10.674 10:38:31 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:10.674 10:38:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.674 10:38:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.674 ************************************ 00:07:10.674 START TEST nvme_single_aen 00:07:10.674 ************************************ 00:07:10.674 10:38:31 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:10.674 Asynchronous Event Request test 00:07:10.674 Attached to 0000:00:13.0 00:07:10.674 Attached to 0000:00:10.0 00:07:10.674 Attached to 0000:00:11.0 00:07:10.674 Attached to 0000:00:12.0 00:07:10.674 Reset controller to setup AER completions for this process 00:07:10.674 Registering asynchronous event callbacks... 
00:07:10.674 Getting orig temperature thresholds of all controllers 00:07:10.674 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:10.674 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:10.674 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:10.674 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:10.674 Setting all controllers temperature threshold low to trigger AER 00:07:10.674 Waiting for all controllers temperature threshold to be set lower 00:07:10.674 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:10.674 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:10.674 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:10.675 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:10.675 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:10.675 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:10.675 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:10.675 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:10.675 Waiting for all controllers to trigger AER and reset threshold 00:07:10.675 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:10.675 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:10.675 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:10.675 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:10.675 Cleaning up... 00:07:10.675 ************************************ 00:07:10.675 END TEST nvme_single_aen 00:07:10.675 ************************************ 00:07:10.675 00:07:10.675 real 0m0.186s 00:07:10.675 user 0m0.065s 00:07:10.675 sys 0m0.077s 00:07:10.675 10:38:31 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.675 10:38:31 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:10.933 10:38:31 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:10.933 10:38:31 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.933 10:38:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.933 10:38:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.933 ************************************ 00:07:10.933 START TEST nvme_doorbell_aers 00:07:10.933 ************************************ 00:07:10.933 10:38:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:07:10.933 10:38:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:10.933 10:38:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:10.933 10:38:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:10.933 10:38:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:10.933 10:38:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:10.933 10:38:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:07:10.933 10:38:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:10.933 10:38:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:10.933 10:38:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
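The single_aen test above fires AERs by setting each controller's temperature threshold below its current temperature and then restoring it, while the doorbell_aers test being set up here (the bdf list is gathered just above and consumed below) provokes error AERs by abusing the doorbell registers. On the host side the AER machinery amounts to one callback registration plus admin-queue polling; a minimal sketch:

#include <stdio.h>
#include <spdk/nvme.h>

/* Fired from spdk_nvme_ctrlr_process_admin_completions() when the controller
 * posts an asynchronous event, e.g. temperature over threshold. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	uint32_t aen = cpl->cdw0;

	/* Bits 2:0 = event type, 15:8 = event info, as in the log lines
	 * "aen_event_type: 0x01, aen_event_info: 0x01" above. */
	printf("aer_cb: event type 0x%02x info 0x%02x\n",
	       aen & 0x7, (aen >> 8) & 0xff);
}

static void
watch_for_aers(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	for (;;) {
		/* AER completions arrive on the admin queue; a real app
		 * would bound this loop instead of polling forever. */
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}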
00:07:10.933 10:38:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:07:10.933 10:38:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:10.933 10:38:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:10.933 10:38:31 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:11.191 [2024-10-08 10:38:31.519581] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:21.163 Executing: test_write_invalid_db 00:07:21.163 Waiting for AER completion... 00:07:21.163 Failure: test_write_invalid_db 00:07:21.163 00:07:21.163 Executing: test_invalid_db_write_overflow_sq 00:07:21.163 Waiting for AER completion... 00:07:21.163 Failure: test_invalid_db_write_overflow_sq 00:07:21.163 00:07:21.163 Executing: test_invalid_db_write_overflow_cq 00:07:21.163 Waiting for AER completion... 00:07:21.163 Failure: test_invalid_db_write_overflow_cq 00:07:21.163 00:07:21.163 10:38:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:21.163 10:38:41 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:21.163 [2024-10-08 10:38:41.521081] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:31.131 Executing: test_write_invalid_db 00:07:31.131 Waiting for AER completion... 00:07:31.131 Failure: test_write_invalid_db 00:07:31.131 00:07:31.131 Executing: test_invalid_db_write_overflow_sq 00:07:31.131 Waiting for AER completion... 00:07:31.131 Failure: test_invalid_db_write_overflow_sq 00:07:31.131 00:07:31.131 Executing: test_invalid_db_write_overflow_cq 00:07:31.131 Waiting for AER completion... 00:07:31.131 Failure: test_invalid_db_write_overflow_cq 00:07:31.131 00:07:31.131 10:38:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:31.131 10:38:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:31.131 [2024-10-08 10:38:51.579330] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:41.107 Executing: test_write_invalid_db 00:07:41.107 Waiting for AER completion... 00:07:41.107 Failure: test_write_invalid_db 00:07:41.107 00:07:41.107 Executing: test_invalid_db_write_overflow_sq 00:07:41.107 Waiting for AER completion... 00:07:41.107 Failure: test_invalid_db_write_overflow_sq 00:07:41.107 00:07:41.107 Executing: test_invalid_db_write_overflow_cq 00:07:41.107 Waiting for AER completion... 
00:07:41.107 Failure: test_invalid_db_write_overflow_cq 00:07:41.107 00:07:41.107 10:39:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:41.107 10:39:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:07:41.107 [2024-10-08 10:39:01.594054] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:51.073 Executing: test_write_invalid_db 00:07:51.073 Waiting for AER completion... 00:07:51.073 Failure: test_write_invalid_db 00:07:51.073 00:07:51.073 Executing: test_invalid_db_write_overflow_sq 00:07:51.073 Waiting for AER completion... 00:07:51.073 Failure: test_invalid_db_write_overflow_sq 00:07:51.073 00:07:51.073 Executing: test_invalid_db_write_overflow_cq 00:07:51.073 Waiting for AER completion... 00:07:51.073 Failure: test_invalid_db_write_overflow_cq 00:07:51.073 00:07:51.073 00:07:51.073 real 0m40.182s 00:07:51.073 user 0m33.997s 00:07:51.073 sys 0m5.854s 00:07:51.073 ************************************ 00:07:51.073 END TEST nvme_doorbell_aers 00:07:51.073 ************************************ 00:07:51.073 10:39:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.073 10:39:11 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:07:51.073 10:39:11 nvme -- nvme/nvme.sh@97 -- # uname 00:07:51.073 10:39:11 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:07:51.073 10:39:11 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:07:51.073 10:39:11 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:51.073 10:39:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.073 10:39:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.073 ************************************ 00:07:51.073 START TEST nvme_multi_aen 00:07:51.073 ************************************ 00:07:51.073 10:39:11 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:07:51.331 [2024-10-08 10:39:11.656934] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:51.331 [2024-10-08 10:39:11.656986] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:51.331 [2024-10-08 10:39:11.657000] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:51.331 [2024-10-08 10:39:11.658229] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:51.331 [2024-10-08 10:39:11.658256] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:51.331 [2024-10-08 10:39:11.658265] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:51.331 [2024-10-08 10:39:11.659251] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. 
Dropping the request. 00:07:51.331 [2024-10-08 10:39:11.659275] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:51.331 [2024-10-08 10:39:11.659285] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:51.331 [2024-10-08 10:39:11.660201] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:51.331 [2024-10-08 10:39:11.660224] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:51.331 [2024-10-08 10:39:11.660233] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76398) is not found. Dropping the request. 00:07:51.331 Child process pid: 76919 00:07:51.331 [Child] Asynchronous Event Request test 00:07:51.331 [Child] Attached to 0000:00:13.0 00:07:51.331 [Child] Attached to 0000:00:10.0 00:07:51.331 [Child] Attached to 0000:00:11.0 00:07:51.331 [Child] Attached to 0000:00:12.0 00:07:51.331 [Child] Registering asynchronous event callbacks... 00:07:51.331 [Child] Getting orig temperature thresholds of all controllers 00:07:51.331 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.331 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.331 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.331 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.331 [Child] Waiting for all controllers to trigger AER and reset threshold 00:07:51.331 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.331 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.331 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.331 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.331 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.331 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.331 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.331 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.331 [Child] Cleaning up... 00:07:51.331 Asynchronous Event Request test 00:07:51.331 Attached to 0000:00:13.0 00:07:51.331 Attached to 0000:00:10.0 00:07:51.331 Attached to 0000:00:11.0 00:07:51.331 Attached to 0000:00:12.0 00:07:51.331 Reset controller to setup AER completions for this process 00:07:51.331 Registering asynchronous event callbacks... 
00:07:51.331 Getting orig temperature thresholds of all controllers 00:07:51.331 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.331 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.331 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.331 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:51.331 Setting all controllers temperature threshold low to trigger AER 00:07:51.331 Waiting for all controllers temperature threshold to be set lower 00:07:51.331 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.331 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:51.331 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.331 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:51.331 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.331 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:51.331 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:51.331 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:51.331 Waiting for all controllers to trigger AER and reset threshold 00:07:51.331 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.331 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.331 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.331 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.331 Cleaning up... 00:07:51.331 00:07:51.331 real 0m0.378s 00:07:51.331 user 0m0.111s 00:07:51.331 sys 0m0.166s 00:07:51.331 10:39:11 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.331 ************************************ 00:07:51.331 END TEST nvme_multi_aen 00:07:51.331 ************************************ 00:07:51.331 10:39:11 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:07:51.331 10:39:11 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:07:51.331 10:39:11 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:51.331 10:39:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.331 10:39:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.590 ************************************ 00:07:51.590 START TEST nvme_startup 00:07:51.590 ************************************ 00:07:51.590 10:39:11 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:07:51.590 Initializing NVMe Controllers 00:07:51.590 Attached to 0000:00:13.0 00:07:51.590 Attached to 0000:00:10.0 00:07:51.590 Attached to 0000:00:11.0 00:07:51.590 Attached to 0000:00:12.0 00:07:51.590 Initialization complete. 00:07:51.590 Time used:120802.281 (us). 
00:07:51.590 00:07:51.590 real 0m0.171s 00:07:51.590 user 0m0.041s 00:07:51.590 sys 0m0.087s 00:07:51.590 10:39:12 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.590 10:39:12 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:07:51.590 ************************************ 00:07:51.590 END TEST nvme_startup 00:07:51.590 ************************************ 00:07:51.590 10:39:12 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:07:51.590 10:39:12 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.590 10:39:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.590 10:39:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.590 ************************************ 00:07:51.590 START TEST nvme_multi_secondary 00:07:51.590 ************************************ 00:07:51.590 10:39:12 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:07:51.590 10:39:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=76969 00:07:51.590 10:39:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:07:51.590 10:39:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=76970 00:07:51.590 10:39:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:07:51.590 10:39:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:07:54.866 Initializing NVMe Controllers 00:07:54.866 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:54.866 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:54.866 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:54.866 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:54.867 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:07:54.867 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:07:54.867 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:07:54.867 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:07:54.867 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:07:54.867 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:07:54.867 Initialization complete. Launching workers. 
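nvme_multi_secondary, started above, runs three spdk_nvme_perf instances against the same devices at once: all of them pass -i 0, so they share one SPDK shared-memory instance (the first to start becomes the DPDK primary and the rest attach as secondaries), and the disjoint core masks 0x1, 0x2 and 0x4 keep their pollers on separate cores. The per-core latency tables that follow come from those instances. A sketch of the env setup behind -i, with illustrative names:

#include <spdk/env.h>

/* Join shared-memory instance 0; processes initialized with the same shm_id
 * share hugepage memory, so one primary and N secondaries can drive the same
 * controllers concurrently. */
static int
join_instance(const char *name, const char *core_mask)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = name;		/* e.g. "perf_secondary" (hypothetical) */
	opts.core_mask = core_mask;	/* "0x2" for the second instance */
	opts.shm_id = 0;		/* matches "-i 0" on the command line */
	return spdk_env_init(&opts);
}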
00:07:54.867 ======================================================== 00:07:54.867 Latency(us) 00:07:54.867 Device Information : IOPS MiB/s Average min max 00:07:54.867 PCIE (0000:00:13.0) NSID 1 from core 2: 3343.19 13.06 4785.51 847.85 12734.90 00:07:54.867 PCIE (0000:00:10.0) NSID 1 from core 2: 3343.19 13.06 4783.85 827.05 13251.22 00:07:54.867 PCIE (0000:00:11.0) NSID 1 from core 2: 3343.19 13.06 4785.85 807.82 12761.20 00:07:54.867 PCIE (0000:00:12.0) NSID 1 from core 2: 3343.19 13.06 4785.12 849.24 12357.50 00:07:54.867 PCIE (0000:00:12.0) NSID 2 from core 2: 3343.19 13.06 4785.97 840.90 12699.95 00:07:54.867 PCIE (0000:00:12.0) NSID 3 from core 2: 3343.19 13.06 4786.53 853.55 12430.42 00:07:54.867 ======================================================== 00:07:54.867 Total : 20059.13 78.36 4785.47 807.82 13251.22 00:07:54.867 00:07:54.867 10:39:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 76969 00:07:55.125 Initializing NVMe Controllers 00:07:55.125 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:55.125 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:55.125 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:55.125 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:55.125 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:07:55.125 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:07:55.125 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:07:55.125 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:07:55.125 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:07:55.125 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:07:55.125 Initialization complete. Launching workers. 00:07:55.125 ======================================================== 00:07:55.125 Latency(us) 00:07:55.125 Device Information : IOPS MiB/s Average min max 00:07:55.125 PCIE (0000:00:13.0) NSID 1 from core 1: 7871.74 30.75 2032.15 981.68 5379.24 00:07:55.125 PCIE (0000:00:10.0) NSID 1 from core 1: 7871.74 30.75 2031.25 883.13 5573.16 00:07:55.125 PCIE (0000:00:11.0) NSID 1 from core 1: 7871.74 30.75 2032.22 969.37 6451.42 00:07:55.125 PCIE (0000:00:12.0) NSID 1 from core 1: 7871.74 30.75 2032.18 894.33 6402.60 00:07:55.125 PCIE (0000:00:12.0) NSID 2 from core 1: 7871.74 30.75 2032.17 966.09 5843.30 00:07:55.125 PCIE (0000:00:12.0) NSID 3 from core 1: 7871.74 30.75 2032.13 915.85 5406.78 00:07:55.125 ======================================================== 00:07:55.125 Total : 47230.43 184.49 2032.01 883.13 6451.42 00:07:55.125 00:07:57.024 Initializing NVMe Controllers 00:07:57.024 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:57.024 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:57.024 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:57.024 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:57.024 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:57.024 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:57.024 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:57.024 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:57.024 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:57.024 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:57.024 Initialization complete. Launching workers. 
00:07:57.024 ======================================================== 00:07:57.024 Latency(us) 00:07:57.024 Device Information : IOPS MiB/s Average min max 00:07:57.024 PCIE (0000:00:13.0) NSID 1 from core 0: 11000.72 42.97 1454.08 696.66 5944.24 00:07:57.024 PCIE (0000:00:10.0) NSID 1 from core 0: 11005.12 42.99 1452.63 670.31 5891.70 00:07:57.024 PCIE (0000:00:11.0) NSID 1 from core 0: 11003.32 42.98 1453.72 665.94 5825.13 00:07:57.024 PCIE (0000:00:12.0) NSID 1 from core 0: 11005.12 42.99 1453.47 672.04 6242.32 00:07:57.024 PCIE (0000:00:12.0) NSID 2 from core 0: 11003.92 42.98 1453.63 653.29 5831.56 00:07:57.024 PCIE (0000:00:12.0) NSID 3 from core 0: 11003.92 42.98 1453.63 535.06 5983.75 00:07:57.024 ======================================================== 00:07:57.024 Total : 66022.14 257.90 1453.53 535.06 6242.32 00:07:57.024 00:07:57.024 10:39:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 76970 00:07:57.024 10:39:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=77039 00:07:57.024 10:39:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:07:57.024 10:39:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=77040 00:07:57.024 10:39:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:07:57.024 10:39:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:00.307 Initializing NVMe Controllers 00:08:00.307 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.307 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.307 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.307 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.307 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:00.307 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:00.307 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:00.307 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:00.307 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:00.307 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:00.307 Initialization complete. Launching workers. 
00:08:00.307 ======================================================== 00:08:00.307 Latency(us) 00:08:00.307 Device Information : IOPS MiB/s Average min max 00:08:00.307 PCIE (0000:00:13.0) NSID 1 from core 1: 8197.90 32.02 1951.32 730.00 6122.24 00:08:00.307 PCIE (0000:00:10.0) NSID 1 from core 1: 8197.90 32.02 1950.46 703.07 6055.63 00:08:00.307 PCIE (0000:00:11.0) NSID 1 from core 1: 8197.90 32.02 1951.45 724.76 5793.75 00:08:00.307 PCIE (0000:00:12.0) NSID 1 from core 1: 8197.90 32.02 1951.44 710.05 5761.15 00:08:00.307 PCIE (0000:00:12.0) NSID 2 from core 1: 8197.90 32.02 1951.46 723.40 6426.08 00:08:00.307 PCIE (0000:00:12.0) NSID 3 from core 1: 8197.90 32.02 1951.45 716.37 6206.20 00:08:00.307 ======================================================== 00:08:00.307 Total : 49187.37 192.14 1951.26 703.07 6426.08 00:08:00.307 00:08:00.307 Initializing NVMe Controllers 00:08:00.307 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.307 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.307 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.307 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.307 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:00.307 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:00.307 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:00.307 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:00.307 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:00.307 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:00.307 Initialization complete. Launching workers. 00:08:00.307 ======================================================== 00:08:00.307 Latency(us) 00:08:00.307 Device Information : IOPS MiB/s Average min max 00:08:00.307 PCIE (0000:00:13.0) NSID 1 from core 0: 8195.85 32.02 1951.78 718.21 5064.74 00:08:00.307 PCIE (0000:00:10.0) NSID 1 from core 0: 8195.85 32.02 1950.87 709.33 5345.84 00:08:00.307 PCIE (0000:00:11.0) NSID 1 from core 0: 8195.85 32.02 1951.77 722.90 5281.34 00:08:00.307 PCIE (0000:00:12.0) NSID 1 from core 0: 8195.85 32.02 1951.74 733.13 5833.62 00:08:00.307 PCIE (0000:00:12.0) NSID 2 from core 0: 8195.85 32.02 1951.78 732.77 5790.30 00:08:00.307 PCIE (0000:00:12.0) NSID 3 from core 0: 8195.85 32.02 1951.75 725.61 5434.97 00:08:00.307 ======================================================== 00:08:00.307 Total : 49175.12 192.09 1951.61 709.33 5833.62 00:08:00.307 00:08:02.209 Initializing NVMe Controllers 00:08:02.209 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:02.209 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:02.209 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:02.209 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:02.209 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:02.209 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:02.209 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:02.209 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:02.209 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:02.209 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:02.209 Initialization complete. Launching workers. 
00:08:02.209 ======================================================== 00:08:02.209 Latency(us) 00:08:02.209 Device Information : IOPS MiB/s Average min max 00:08:02.209 PCIE (0000:00:13.0) NSID 1 from core 2: 4612.42 18.02 3468.22 734.36 12427.02 00:08:02.209 PCIE (0000:00:10.0) NSID 1 from core 2: 4612.42 18.02 3466.70 744.71 12244.93 00:08:02.209 PCIE (0000:00:11.0) NSID 1 from core 2: 4612.42 18.02 3468.60 756.61 11467.10 00:08:02.209 PCIE (0000:00:12.0) NSID 1 from core 2: 4612.42 18.02 3468.07 761.52 12641.52 00:08:02.209 PCIE (0000:00:12.0) NSID 2 from core 2: 4612.42 18.02 3468.41 738.31 12236.09 00:08:02.209 PCIE (0000:00:12.0) NSID 3 from core 2: 4612.42 18.02 3468.18 738.84 12712.13 00:08:02.209 ======================================================== 00:08:02.209 Total : 27674.55 108.10 3468.03 734.36 12712.13 00:08:02.209 00:08:02.209 10:39:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 77039 00:08:02.209 10:39:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 77040 00:08:02.209 00:08:02.209 real 0m10.582s 00:08:02.209 user 0m18.299s 00:08:02.209 sys 0m0.548s 00:08:02.209 10:39:22 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.209 10:39:22 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:02.209 ************************************ 00:08:02.209 END TEST nvme_multi_secondary 00:08:02.209 ************************************ 00:08:02.209 10:39:22 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:02.209 10:39:22 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:02.209 10:39:22 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/76008 ]] 00:08:02.209 10:39:22 nvme -- common/autotest_common.sh@1090 -- # kill 76008 00:08:02.209 10:39:22 nvme -- common/autotest_common.sh@1091 -- # wait 76008 00:08:02.209 [2024-10-08 10:39:22.735151] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.209 [2024-10-08 10:39:22.735257] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.209 [2024-10-08 10:39:22.735292] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.209 [2024-10-08 10:39:22.735332] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.209 [2024-10-08 10:39:22.736154] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.209 [2024-10-08 10:39:22.736226] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.209 [2024-10-08 10:39:22.736260] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.209 [2024-10-08 10:39:22.736286] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.209 [2024-10-08 10:39:22.737078] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 
00:08:02.209 [2024-10-08 10:39:22.737155] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.209 [2024-10-08 10:39:22.737191] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.209 [2024-10-08 10:39:22.737235] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.209 [2024-10-08 10:39:22.738006] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.209 [2024-10-08 10:39:22.738075] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.209 [2024-10-08 10:39:22.738106] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.209 [2024-10-08 10:39:22.738132] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76918) is not found. Dropping the request. 00:08:02.470 10:39:22 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:08:02.470 10:39:22 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:08:02.470 10:39:22 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:02.470 10:39:22 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.470 10:39:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.470 10:39:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.470 ************************************ 00:08:02.470 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:02.470 ************************************ 00:08:02.470 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:02.470 * Looking for test storage... 
00:08:02.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:02.470 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:02.470 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:08:02.470 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:02.470 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:02.470 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.470 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.470 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:02.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.471 --rc genhtml_branch_coverage=1 00:08:02.471 --rc genhtml_function_coverage=1 00:08:02.471 --rc genhtml_legend=1 00:08:02.471 --rc geninfo_all_blocks=1 00:08:02.471 --rc geninfo_unexecuted_blocks=1 00:08:02.471 00:08:02.471 ' 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:02.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.471 --rc genhtml_branch_coverage=1 00:08:02.471 --rc genhtml_function_coverage=1 00:08:02.471 --rc genhtml_legend=1 00:08:02.471 --rc geninfo_all_blocks=1 00:08:02.471 --rc geninfo_unexecuted_blocks=1 00:08:02.471 00:08:02.471 ' 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:02.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.471 --rc genhtml_branch_coverage=1 00:08:02.471 --rc genhtml_function_coverage=1 00:08:02.471 --rc genhtml_legend=1 00:08:02.471 --rc geninfo_all_blocks=1 00:08:02.471 --rc geninfo_unexecuted_blocks=1 00:08:02.471 00:08:02.471 ' 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:02.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.471 --rc genhtml_branch_coverage=1 00:08:02.471 --rc genhtml_function_coverage=1 00:08:02.471 --rc genhtml_legend=1 00:08:02.471 --rc geninfo_all_blocks=1 00:08:02.471 --rc geninfo_unexecuted_blocks=1 00:08:02.471 00:08:02.471 ' 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:02.471 
10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:02.471 10:39:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77201 00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77201 00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 77201 ']' 00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
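The bdf discovery traced above reduces to a single pipeline: gen_nvme.sh emits a JSON bdev config for every controller it can see, and jq pulls each controller's PCI address out of params.traddr; get_first_nvme_bdf then simply takes the first entry. A minimal standalone sketch of the same idea, assuming the spdk_repo layout used in this run:

    #!/usr/bin/env bash
    # Enumerate NVMe PCI addresses the way get_nvme_bdfs does in the xtrace above.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
    bdf=${bdfs[0]}               # the controller this test attaches as nvme0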
00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.471 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:02.731 [2024-10-08 10:39:23.121207] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:02.731 [2024-10-08 10:39:23.121354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77201 ] 00:08:02.732 [2024-10-08 10:39:23.265927] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:02.732 [2024-10-08 10:39:23.285574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.990 [2024-10-08 10:39:23.322520] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.990 [2024-10-08 10:39:23.322851] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.990 [2024-10-08 10:39:23.323000] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.990 [2024-10-08 10:39:23.323053] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.558 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:03.558 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:08:03.558 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:03.558 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.558 10:39:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:03.558 nvme0n1 00:08:03.558 10:39:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.558 10:39:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:03.558 10:39:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_Rb68m.txt 00:08:03.558 10:39:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:03.558 10:39:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.558 10:39:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:03.558 true 00:08:03.558 10:39:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.558 10:39:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:03.558 10:39:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1728383964 00:08:03.558 10:39:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77224 00:08:03.558 10:39:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:03.558 10:39:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 
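Condensed, the error-injection flow the test drives through rpc.py looks as follows. Opcode 10 is the admin GET FEATURES command, and sct=0 / sc=1 encodes the generic-status "Invalid Opcode" completion being planted; the command payload and the captured completion are copied from the xtrace, while the exact ordering is a hedged reconstruction (tmp_file is the mktemp'd capture file from above), not a substitute for the test script:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Arm the injection: hold the next admin GET FEATURES (opc 10) for up to
    # 15 s and, when it finally completes, force status sct=0 sc=1.
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Fire the admin command in the background (opc 0x0a, cdw10=7: Number of
    # Queues, as the completion printout below confirms); it is now stuck.
    cmd=CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd" > "$tmp_file" &
    sleep 2   # the test sleeps 2 s so the stuck command reaches the controller
    # The reset must abort and manually complete the held command well inside
    # test_timeout=5 s rather than riding out the 15 s hold (diff_time=2 here).
    $rpc bdev_nvme_reset_controller nvme0
    wait
    # Decode the captured CQE: 16 bytes, where bytes 14..15 hold the phase tag
    # (bit 0), the status code SC (bits 1..8) and the status code type SCT
    # (bits 9..11) per the NVMe completion layout.
    cpl=$(jq -r .cpl "$tmp_file")   # AAAAAAAAAAAAAAAAAAACAA== in this run
    mapfile -t b < <(printf '%s' "$cpl" | base64 -d | hexdump -ve '/1 "%u\n"')
    status=$(( b[14] | (b[15] << 8) ))
    printf 'sct=0x%x sc=0x%x\n' $(( (status >> 9) & 0x7 )) $(( (status >> 1) & 0xff ))
    # prints sct=0x0 sc=0x1, matching the injected values, so the check passes

The captured value AAAAAAAAAAAAAAAAAAACAA== decodes to a status word of 0x0002: phase 0, SC 0x1, SCT 0x0, which is exactly the INVALID OPCODE (00/01) completion the qpair printout below reports.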
00:08:03.558 10:39:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:06.095 [2024-10-08 10:39:26.050007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:06.095 [2024-10-08 10:39:26.050236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:06.095 [2024-10-08 10:39:26.050258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:06.095 [2024-10-08 10:39:26.050270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:06.095 [2024-10-08 10:39:26.052071] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.095 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77224 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77224 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77224 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_Rb68m.txt 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:06.095 10:39:26 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_Rb68m.txt 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77201 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 77201 ']' 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 77201 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77201 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:06.095 killing process with pid 77201 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77201' 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 77201 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 77201 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:06.095 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:06.095 00:08:06.096 real 
0m3.564s 00:08:06.096 user 0m12.630s 00:08:06.096 sys 0m0.492s 00:08:06.096 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.096 ************************************ 00:08:06.096 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:06.096 10:39:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:06.096 ************************************ 00:08:06.096 10:39:26 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:06.096 10:39:26 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:06.096 10:39:26 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:06.096 10:39:26 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.096 10:39:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.096 ************************************ 00:08:06.096 START TEST nvme_fio 00:08:06.096 ************************************ 00:08:06.096 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:08:06.096 10:39:26 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:06.096 10:39:26 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:06.096 10:39:26 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:06.096 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:06.096 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:08:06.096 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:06.096 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:06.096 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:06.096 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:06.096 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:06.096 10:39:26 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:06.096 10:39:26 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:06.096 10:39:26 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:06.096 10:39:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:06.096 10:39:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:06.354 10:39:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:06.354 10:39:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:06.354 10:39:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:06.354 10:39:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:06.354 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:06.354 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:06.354 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:08:06.354 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:06.354 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:06.354 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:06.354 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:06.354 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:06.354 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:06.354 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:06.354 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:06.615 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:06.615 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:06.615 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:06.615 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:06.615 10:39:26 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:06.615 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:06.615 fio-3.35 00:08:06.615 Starting 1 thread 00:08:13.187 00:08:13.187 test: (groupid=0, jobs=1): err= 0: pid=77347: Tue Oct 8 10:39:32 2024 00:08:13.187 read: IOPS=21.9k, BW=85.4MiB/s (89.6MB/s)(171MiB/2001msec) 00:08:13.187 slat (nsec): min=3342, max=70605, avg=5154.28, stdev=2559.73 00:08:13.187 clat (usec): min=272, max=11414, avg=2920.54, stdev=1054.61 00:08:13.187 lat (usec): min=276, max=11478, avg=2925.69, stdev=1055.98 00:08:13.187 clat percentiles (usec): 00:08:13.187 | 1.00th=[ 1450], 5.00th=[ 2057], 10.00th=[ 2147], 20.00th=[ 2311], 00:08:13.187 | 30.00th=[ 2376], 40.00th=[ 2442], 50.00th=[ 2540], 60.00th=[ 2638], 00:08:13.187 | 70.00th=[ 2835], 80.00th=[ 3392], 90.00th=[ 4555], 95.00th=[ 5342], 00:08:13.187 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 8717], 99.95th=[ 9372], 00:08:13.187 | 99.99th=[11207] 00:08:13.187 bw ( KiB/s): min=80592, max=92016, per=100.00%, avg=87520.00, stdev=6087.93, samples=3 00:08:13.187 iops : min=20148, max=23004, avg=21880.00, stdev=1521.98, samples=3 00:08:13.187 write: IOPS=21.7k, BW=84.8MiB/s (88.9MB/s)(170MiB/2001msec); 0 zone resets 00:08:13.187 slat (nsec): min=3453, max=72515, avg=5296.10, stdev=2467.88 00:08:13.187 clat (usec): min=215, max=11325, avg=2933.72, stdev=1058.58 00:08:13.187 lat (usec): min=219, max=11340, avg=2939.01, stdev=1059.87 00:08:13.187 clat percentiles (usec): 00:08:13.187 | 1.00th=[ 1450], 5.00th=[ 2073], 10.00th=[ 2180], 20.00th=[ 2311], 00:08:13.187 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2671], 00:08:13.187 | 70.00th=[ 2868], 80.00th=[ 3392], 90.00th=[ 4555], 95.00th=[ 5342], 00:08:13.187 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 8848], 99.95th=[ 9503], 00:08:13.187 | 99.99th=[10945] 00:08:13.187 bw ( KiB/s): min=80472, max=91848, per=100.00%, avg=87669.33, stdev=6260.00, samples=3 00:08:13.187 iops : min=20118, max=22962, avg=21917.33, stdev=1565.00, samples=3 00:08:13.187 lat (usec) 
: 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.07% 00:08:13.187 lat (msec) : 2=3.87%, 4=82.11%, 10=13.88%, 20=0.04% 00:08:13.187 cpu : usr=99.05%, sys=0.10%, ctx=6, majf=0, minf=624 00:08:13.187 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:13.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:13.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:13.187 issued rwts: total=43748,43438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:13.187 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:13.187 00:08:13.187 Run status group 0 (all jobs): 00:08:13.187 READ: bw=85.4MiB/s (89.6MB/s), 85.4MiB/s-85.4MiB/s (89.6MB/s-89.6MB/s), io=171MiB (179MB), run=2001-2001msec 00:08:13.187 WRITE: bw=84.8MiB/s (88.9MB/s), 84.8MiB/s-84.8MiB/s (88.9MB/s-88.9MB/s), io=170MiB (178MB), run=2001-2001msec 00:08:13.187 ----------------------------------------------------- 00:08:13.187 Suppressions used: 00:08:13.187 count bytes template 00:08:13.187 1 32 /usr/src/fio/parse.c 00:08:13.187 1 8 libtcmalloc_minimal.so 00:08:13.187 ----------------------------------------------------- 00:08:13.187 00:08:13.187 10:39:33 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:13.187 10:39:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:13.187 10:39:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:13.187 10:39:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:13.187 10:39:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:13.187 10:39:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:13.187 10:39:33 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:13.187 10:39:33 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # 
[[ -n /usr/lib64/libasan.so.8 ]] 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:13.187 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:13.188 10:39:33 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:13.188 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:13.188 fio-3.35 00:08:13.188 Starting 1 thread 00:08:18.476 00:08:18.476 test: (groupid=0, jobs=1): err= 0: pid=77402: Tue Oct 8 10:39:38 2024 00:08:18.476 read: IOPS=17.1k, BW=66.9MiB/s (70.2MB/s)(134MiB/2002msec) 00:08:18.476 slat (nsec): min=4690, max=80695, avg=6261.88, stdev=3120.57 00:08:18.476 clat (usec): min=1084, max=13815, avg=3596.38, stdev=1172.09 00:08:18.476 lat (usec): min=1089, max=13869, avg=3602.64, stdev=1173.27 00:08:18.476 clat percentiles (usec): 00:08:18.476 | 1.00th=[ 1827], 5.00th=[ 2409], 10.00th=[ 2606], 20.00th=[ 2802], 00:08:18.476 | 30.00th=[ 2933], 40.00th=[ 3064], 50.00th=[ 3195], 60.00th=[ 3392], 00:08:18.476 | 70.00th=[ 3687], 80.00th=[ 4359], 90.00th=[ 5407], 95.00th=[ 6063], 00:08:18.476 | 99.00th=[ 7111], 99.50th=[ 7635], 99.90th=[10159], 99.95th=[11207], 00:08:18.476 | 99.99th=[13698] 00:08:18.476 bw ( KiB/s): min=65968, max=74616, per=100.00%, avg=69357.33, stdev=4617.12, samples=3 00:08:18.476 iops : min=16492, max=18654, avg=17339.33, stdev=1154.28, samples=3 00:08:18.476 write: IOPS=17.2k, BW=67.0MiB/s (70.3MB/s)(134MiB/2002msec); 0 zone resets 00:08:18.476 slat (usec): min=4, max=216, avg= 6.42, stdev= 3.34 00:08:18.476 clat (usec): min=1198, max=17543, avg=3845.54, stdev=1625.99 00:08:18.476 lat (usec): min=1203, max=17549, avg=3851.97, stdev=1626.75 00:08:18.476 clat percentiles (usec): 00:08:18.476 | 1.00th=[ 1926], 5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2868], 00:08:18.476 | 30.00th=[ 2999], 40.00th=[ 3130], 50.00th=[ 3294], 60.00th=[ 3490], 00:08:18.476 | 70.00th=[ 3884], 80.00th=[ 4752], 90.00th=[ 5735], 95.00th=[ 6652], 00:08:18.476 | 99.00th=[10945], 99.50th=[12911], 99.90th=[15926], 99.95th=[16909], 00:08:18.476 | 99.99th=[17433] 00:08:18.476 bw ( KiB/s): min=66288, max=74336, per=100.00%, avg=69288.00, stdev=4397.53, samples=3 00:08:18.476 iops : min=16572, max=18584, avg=17322.00, stdev=1099.38, samples=3 00:08:18.476 lat (msec) : 2=1.47%, 4=72.23%, 10=25.54%, 20=0.76% 00:08:18.476 cpu : usr=98.75%, sys=0.15%, ctx=6, majf=0, minf=625 00:08:18.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:18.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:18.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:18.476 issued rwts: total=34299,34344,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:18.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:18.476 00:08:18.476 Run status group 0 (all jobs): 00:08:18.476 READ: bw=66.9MiB/s (70.2MB/s), 66.9MiB/s-66.9MiB/s (70.2MB/s-70.2MB/s), io=134MiB (140MB), run=2002-2002msec 00:08:18.476 WRITE: bw=67.0MiB/s (70.3MB/s), 67.0MiB/s-67.0MiB/s (70.3MB/s-70.3MB/s), io=134MiB (141MB), run=2002-2002msec 00:08:18.476 ----------------------------------------------------- 00:08:18.476 Suppressions used: 00:08:18.476 count bytes template 00:08:18.476 1 32 /usr/src/fio/parse.c 00:08:18.476 1 8 
libtcmalloc_minimal.so 00:08:18.476 ----------------------------------------------------- 00:08:18.476 00:08:18.476 10:39:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:18.476 10:39:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:18.476 10:39:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:18.476 10:39:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:18.476 10:39:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:18.476 10:39:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:18.738 10:39:39 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:18.738 10:39:39 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:18.738 10:39:39 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:18.738 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:18.738 fio-3.35 00:08:18.738 Starting 1 thread 00:08:25.372 00:08:25.372 test: (groupid=0, jobs=1): err= 0: pid=77463: Tue Oct 8 10:39:45 2024 00:08:25.372 read: IOPS=17.4k, BW=68.0MiB/s (71.3MB/s)(136MiB/2001msec) 00:08:25.372 slat (nsec): min=4874, max=65379, avg=6310.24, stdev=3116.08 00:08:25.372 clat (usec): min=304, max=10214, avg=3640.53, stdev=1082.20 00:08:25.372 lat (usec): min=310, max=10226, avg=3646.84, stdev=1083.40 00:08:25.372 clat percentiles (usec): 
00:08:25.372 | 1.00th=[ 2057], 5.00th=[ 2671], 10.00th=[ 2802], 20.00th=[ 2933], 00:08:25.372 | 30.00th=[ 3032], 40.00th=[ 3130], 50.00th=[ 3261], 60.00th=[ 3392], 00:08:25.372 | 70.00th=[ 3687], 80.00th=[ 4293], 90.00th=[ 5211], 95.00th=[ 6063], 00:08:25.372 | 99.00th=[ 7308], 99.50th=[ 7898], 99.90th=[ 9110], 99.95th=[ 9634], 00:08:25.372 | 99.99th=[10028] 00:08:25.372 bw ( KiB/s): min=68784, max=69453, per=99.26%, avg=69143.00, stdev=337.18, samples=3 00:08:25.372 iops : min=17196, max=17363, avg=17285.67, stdev=84.18, samples=3 00:08:25.372 write: IOPS=17.4k, BW=68.1MiB/s (71.4MB/s)(136MiB/2001msec); 0 zone resets 00:08:25.372 slat (usec): min=4, max=131, avg= 6.43, stdev= 3.09 00:08:25.372 clat (usec): min=261, max=10380, avg=3679.78, stdev=1100.79 00:08:25.372 lat (usec): min=267, max=10386, avg=3686.21, stdev=1101.92 00:08:25.372 clat percentiles (usec): 00:08:25.372 | 1.00th=[ 2073], 5.00th=[ 2704], 10.00th=[ 2802], 20.00th=[ 2966], 00:08:25.372 | 30.00th=[ 3064], 40.00th=[ 3163], 50.00th=[ 3261], 60.00th=[ 3425], 00:08:25.372 | 70.00th=[ 3720], 80.00th=[ 4359], 90.00th=[ 5342], 95.00th=[ 6128], 00:08:25.372 | 99.00th=[ 7373], 99.50th=[ 7963], 99.90th=[ 9372], 99.95th=[ 9765], 00:08:25.372 | 99.99th=[10159] 00:08:25.372 bw ( KiB/s): min=68720, max=69405, per=99.04%, avg=69076.33, stdev=343.34, samples=3 00:08:25.372 iops : min=17180, max=17351, avg=17269.00, stdev=85.71, samples=3 00:08:25.372 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:08:25.372 lat (msec) : 2=0.81%, 4=75.00%, 10=24.14%, 20=0.02% 00:08:25.372 cpu : usr=98.65%, sys=0.20%, ctx=7, majf=0, minf=624 00:08:25.372 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:25.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:25.372 issued rwts: total=34847,34889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:25.372 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:25.372 00:08:25.372 Run status group 0 (all jobs): 00:08:25.372 READ: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=136MiB (143MB), run=2001-2001msec 00:08:25.372 WRITE: bw=68.1MiB/s (71.4MB/s), 68.1MiB/s-68.1MiB/s (71.4MB/s-71.4MB/s), io=136MiB (143MB), run=2001-2001msec 00:08:25.372 ----------------------------------------------------- 00:08:25.372 Suppressions used: 00:08:25.372 count bytes template 00:08:25.372 1 32 /usr/src/fio/parse.c 00:08:25.372 1 8 libtcmalloc_minimal.so 00:08:25.372 ----------------------------------------------------- 00:08:25.372 00:08:25.372 10:39:45 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:25.372 10:39:45 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:25.372 10:39:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:25.372 10:39:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:25.372 10:39:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:25.372 10:39:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:25.372 10:39:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:25.372 10:39:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:25.372 10:39:45 nvme.nvme_fio -- 
common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:25.373 10:39:45 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:25.373 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:25.373 fio-3.35 00:08:25.373 Starting 1 thread 00:08:31.956 00:08:31.956 test: (groupid=0, jobs=1): err= 0: pid=77524: Tue Oct 8 10:39:52 2024 00:08:31.956 read: IOPS=19.6k, BW=76.7MiB/s (80.5MB/s)(154MiB/2001msec) 00:08:31.956 slat (nsec): min=4224, max=68074, avg=5440.52, stdev=2818.81 00:08:31.956 clat (usec): min=262, max=11625, avg=3245.13, stdev=1058.90 00:08:31.956 lat (usec): min=267, max=11661, avg=3250.57, stdev=1060.11 00:08:31.956 clat percentiles (usec): 00:08:31.956 | 1.00th=[ 1958], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2540], 00:08:31.956 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2966], 00:08:31.956 | 70.00th=[ 3294], 80.00th=[ 3982], 90.00th=[ 4948], 95.00th=[ 5604], 00:08:31.956 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 8225], 99.95th=[ 9634], 00:08:31.956 | 99.99th=[11600] 00:08:31.956 bw ( KiB/s): min=78192, max=83208, per=100.00%, avg=79997.33, stdev=2787.70, samples=3 00:08:31.956 iops : min=19548, max=20802, avg=19999.33, stdev=696.93, samples=3 00:08:31.956 write: IOPS=19.6k, BW=76.6MiB/s (80.4MB/s)(153MiB/2001msec); 0 zone resets 00:08:31.956 slat (nsec): min=4289, max=79279, avg=5551.57, stdev=2861.14 00:08:31.956 clat (usec): min=270, max=11562, avg=3254.70, stdev=1068.44 00:08:31.956 lat (usec): min=275, max=11573, avg=3260.25, stdev=1069.63 00:08:31.956 clat percentiles (usec): 00:08:31.956 | 1.00th=[ 1909], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2540], 00:08:31.956 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2835], 
60.00th=[ 2966], 00:08:31.956 | 70.00th=[ 3294], 80.00th=[ 3949], 90.00th=[ 4948], 95.00th=[ 5604], 00:08:31.956 | 99.00th=[ 6587], 99.50th=[ 6980], 99.90th=[ 8717], 99.95th=[ 9765], 00:08:31.956 | 99.99th=[11469] 00:08:31.956 bw ( KiB/s): min=78040, max=83464, per=100.00%, avg=80024.00, stdev=2990.80, samples=3 00:08:31.956 iops : min=19510, max=20866, avg=20006.00, stdev=747.70, samples=3 00:08:31.956 lat (usec) : 500=0.02%, 1000=0.01% 00:08:31.956 lat (msec) : 2=1.15%, 4=79.28%, 10=19.52%, 20=0.03% 00:08:31.956 cpu : usr=98.95%, sys=0.05%, ctx=9, majf=0, minf=625 00:08:31.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:31.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:31.956 issued rwts: total=39314,39258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.956 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:31.956 00:08:31.956 Run status group 0 (all jobs): 00:08:31.956 READ: bw=76.7MiB/s (80.5MB/s), 76.7MiB/s-76.7MiB/s (80.5MB/s-80.5MB/s), io=154MiB (161MB), run=2001-2001msec 00:08:31.956 WRITE: bw=76.6MiB/s (80.4MB/s), 76.6MiB/s-76.6MiB/s (80.4MB/s-80.4MB/s), io=153MiB (161MB), run=2001-2001msec 00:08:31.956 ----------------------------------------------------- 00:08:31.956 Suppressions used: 00:08:31.956 count bytes template 00:08:31.956 1 32 /usr/src/fio/parse.c 00:08:31.956 1 8 libtcmalloc_minimal.so 00:08:31.957 ----------------------------------------------------- 00:08:31.957 00:08:32.261 ************************************ 00:08:32.261 END TEST nvme_fio 00:08:32.261 ************************************ 00:08:32.261 10:39:52 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:32.261 10:39:52 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:32.261 00:08:32.261 real 0m26.080s 00:08:32.261 user 0m17.011s 00:08:32.261 sys 0m15.520s 00:08:32.261 10:39:52 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.261 10:39:52 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:32.261 ************************************ 00:08:32.261 END TEST nvme 00:08:32.261 ************************************ 00:08:32.261 00:08:32.261 real 1m33.473s 00:08:32.261 user 3m31.242s 00:08:32.261 sys 0m26.053s 00:08:32.261 10:39:52 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.261 10:39:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:32.261 10:39:52 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:32.261 10:39:52 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:32.261 10:39:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:32.261 10:39:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.261 10:39:52 -- common/autotest_common.sh@10 -- # set +x 00:08:32.261 ************************************ 00:08:32.261 START TEST nvme_scc 00:08:32.261 ************************************ 00:08:32.261 10:39:52 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:32.261 * Looking for test storage... 
00:08:32.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:32.261 10:39:52 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:32.261 10:39:52 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:32.261 10:39:52 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:32.261 10:39:52 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:32.261 10:39:52 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.261 10:39:52 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:32.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.261 --rc genhtml_branch_coverage=1 00:08:32.261 --rc genhtml_function_coverage=1 00:08:32.261 --rc genhtml_legend=1 00:08:32.261 --rc geninfo_all_blocks=1 00:08:32.261 --rc geninfo_unexecuted_blocks=1 00:08:32.261 00:08:32.261 ' 00:08:32.261 10:39:52 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:32.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.261 --rc genhtml_branch_coverage=1 00:08:32.261 --rc genhtml_function_coverage=1 00:08:32.261 --rc genhtml_legend=1 00:08:32.261 --rc geninfo_all_blocks=1 00:08:32.261 --rc geninfo_unexecuted_blocks=1 00:08:32.261 00:08:32.261 ' 00:08:32.261 10:39:52 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:08:32.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.261 --rc genhtml_branch_coverage=1 00:08:32.261 --rc genhtml_function_coverage=1 00:08:32.261 --rc genhtml_legend=1 00:08:32.261 --rc geninfo_all_blocks=1 00:08:32.261 --rc geninfo_unexecuted_blocks=1 00:08:32.261 00:08:32.261 ' 00:08:32.261 10:39:52 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:32.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.261 --rc genhtml_branch_coverage=1 00:08:32.261 --rc genhtml_function_coverage=1 00:08:32.261 --rc genhtml_legend=1 00:08:32.261 --rc geninfo_all_blocks=1 00:08:32.261 --rc geninfo_unexecuted_blocks=1 00:08:32.261 00:08:32.261 ' 00:08:32.261 10:39:52 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:32.261 10:39:52 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:32.261 10:39:52 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:32.261 10:39:52 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:32.261 10:39:52 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.261 10:39:52 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.523 10:39:52 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.524 10:39:52 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.524 10:39:52 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.524 10:39:52 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.524 10:39:52 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.524 10:39:52 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.524 10:39:52 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:32.524 10:39:52 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
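The lcov probe above (common/autotest_common.sh@1680-1695 via scripts/common.sh) is a plain field-by-field version comparison: lt 1.15 2 splits both versions on the characters ".-:", walks the fields numerically, and succeeds at the first field where ver1 is smaller, which selects the lcov 1.x option set (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1). A simplified sketch of just the "<" path, reconstructed from the trace (the real cmp_versions dispatches on the operator through a case statement and validates each field with decimal()):

    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            # Missing fields default to 0, so 1.15 vs 2 compares as 1.15 vs 2.0.
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions: "<" does not hold
    }

Here the first fields already decide it (1 < 2), matching the ver1[v]=1 / ver2[v]=2 steps visible in the trace.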
00:08:32.524 10:39:52 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:32.524 10:39:52 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:32.524 10:39:52 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:32.524 10:39:52 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:32.524 10:39:52 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:32.524 10:39:52 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:32.524 10:39:52 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:32.524 10:39:52 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:32.524 10:39:52 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:32.524 10:39:52 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.524 10:39:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:32.524 10:39:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:32.524 10:39:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:32.524 10:39:52 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:32.786 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:32.786 Waiting for block devices as requested 00:08:32.786 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:33.046 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:33.046 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:33.046 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:38.343 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:38.343 10:39:58 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:08:38.343 10:39:58 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:38.343 10:39:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:38.343 10:39:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:38.343 10:39:58 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
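Everything from here on is scan_nvme_ctrls walking each /sys/class/nvme/nvme* controller and materializing the nvme-cli id-ctrl and id-ns reports as bash associative arrays (nvme0[vid], nvme0[sn], and so on, as the evals above and below show). Reconstructed from the functions.sh@16-23 trace, nvme_get is essentially a key:value reader; this sketch is illustrative rather than a verbatim copy:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # declare a global assoc array, e.g. nvme0
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue      # skip banner/blank lines with no "key : value"
            # Store each report field under its trimmed key; this is the
            # eval 'nvme0[vid]="0x1b36"' pattern repeated throughout the log.
            eval "${ref}[${reg// /}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Invoked as nvme_get nvme0 id-ctrl /dev/nvme0, which is why the trace shows /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 at functions.sh@16 followed by one eval per report field.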
00:08:38.343 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
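Many of the captured fields are bitmasks defined by the NVMe base specification, so the raw values carry more information than they look: the oacs=0x12a recorded just above, for example, advertises Format NVM (bit 1), Namespace Management (bit 3), Directives (bit 5), and Doorbell Buffer Config (bit 8) on this QEMU controller. A quick illustrative decode, not part of the test scripts:

    # oacs = Optional Admin Command Support, value from the id-ctrl dump above
    oacs=0x12a
    declare -A oacs_bits=([0]="security send/recv" [1]="format nvm"
        [2]="firmware download/commit" [3]="namespace management"
        [5]="directives" [8]="doorbell buffer config")
    for bit in "${!oacs_bits[@]}"; do
        (( oacs >> bit & 1 )) && echo "oacs bit $bit: ${oacs_bits[$bit]}"
    done

The same reading applies to oncs, frmw, lpa, and the other hex fields in this dump.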
00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.344 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:38.345 10:39:58 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.345 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:38.346 10:39:58 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:38.346 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
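After the controller-level fields, the same machinery runs once per namespace: functions.sh@53-63 (the @53-57 steps are visible just above; the @58-63 registration steps appear a little further below) binds a nameref to a per-controller namespace array, runs id-ns against each /dev/nvmeXnY, and finally registers the controller in the global ctrls/nvmes/bdfs/ordered_ctrls maps. In outline (reconstructed from the trace, not verbatim):

    local -n _ctrl_ns=${ctrl_dev}_ns              # nameref, e.g. to nvme0_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do           # /sys/class/nvme/nvme0/nvme0n1 ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # fills nvme0n1[nsze] etc.
        _ctrl_ns[${ns##*n}]=$ns_dev               # keyed by namespace number
    done
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns
    bdfs["$ctrl_dev"]=$pci                        # 0000:00:11.0 for nvme0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev

The values collected for nvme0n1 also pin down its size: nsze=0x140000 blocks with the in-use LBA format's lbads of 12 (4096-byte blocks, per the lbaf4 entry below) gives 0x140000 * 4096 B = 5 GiB.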
00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:38.347 10:39:58 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.347 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:38.348 10:39:58 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:38.348 10:39:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:38.348 10:39:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:38.348 10:39:58 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- 
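The lbaf0-lbaf7 strings captured above come straight from nvme-cli's id-ns output: ms is the metadata bytes per block, lbads is the data size as a power of two (lbads:9 = 512 B, lbads:12 = 4096 B), rp is relative performance, and "(in use)" marks the active format. A minimal decoder for one descriptor, assuming only that shape (decode_lbaf is an illustrative name, not part of functions.sh):

    # decode an lbaf descriptor such as "ms:0 lbads:12 rp:0 (in use)"
    decode_lbaf() {
        local desc=$1
        [[ $desc =~ ms:([0-9]+)\ lbads:([0-9]+) ]] || return 1
        echo "data block: $((1 << BASH_REMATCH[2])) B, metadata: ${BASH_REMATCH[1]} B"
    }
    decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'   # -> data block: 4096 B, metadata: 0 B

The registrations that follow (ctrls, nvmes, bdfs, ordered_ctrls) file nvme0 under its device name and PCI address 0000:00:11.0 before the loop moves on to nvme1.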
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:38.348 10:39:58 nvme_scc -- 
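Every assignment in this trace is produced by one loop in nvme/functions.sh: the tool's output is split on ":" by read -r reg val, empty values are skipped by the [[ -n ... ]] guard, and eval stores each pair in a global associative array named after the device. A simplified re-creation, assuming that shape (whitespace handling in the real script differs in detail):

    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # global array named by $ref, e.g. nvme1
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue         # the [[ -n ... ]] guard seen above
            eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""
        done < <("$@")                        # e.g. nvme id-ctrl /dev/nvme1
    }
    nvme_get_sketch nvme1 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
    echo "${nvme1[sn]}"                       # -> "12340 " (padding preserved)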
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:38.348 
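ver=0x10400 follows the NVMe version register layout: major version in bits 31:16, minor in 15:8, tertiary in 7:0, so this QEMU controller reports NVMe 1.4.0:

    ver=$((0x10400))
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    # -> NVMe 1.4.0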
10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.348 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
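oacs=0x12a is the Optional Admin Command Support bitmask. Reading 0x12a = 0b100101010 against the NVMe base spec (bit 1 Format NVM, bit 3 namespace management, bit 5 directives, bit 8 doorbell buffer config, to the best of my reading) matches what QEMU's emulated controller advertises:

    oacs=$((0x12a))
    (( oacs & (1 << 1) )) && echo 'Format NVM'
    (( oacs & (1 << 3) )) && echo 'namespace management'
    (( oacs & (1 << 8) )) && echo 'doorbell buffer config'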
00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:08:38.349 10:39:58 nvme_scc -- 
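wctemp=343 and cctemp=373 look odd until you recall that id-ctrl reports temperature thresholds in Kelvin: 343 K is a 70 C warning threshold and 373 K a 100 C critical threshold.

    echo "$((343 - 273)) C warning, $((373 - 273)) C critical"   # -> 70 C warning, 100 C critical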
nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:08:38.349 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:08:38.350 10:39:58 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- 
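sqes=0x66 and cqes=0x44 pack two log2 sizes into one byte each: the low nibble is the required queue-entry size and the high nibble the maximum, giving the standard 64-byte submission and 16-byte completion entries:

    sqes=$((0x66)); cqes=$((0x44))
    echo "SQE $((1 << (sqes & 0xf)))B (max $((1 << (sqes >> 4)))B)"   # -> SQE 64B (max 64B)
    echo "CQE $((1 << (cqes & 0xf)))B (max $((1 << (cqes >> 4)))B)"   # -> CQE 16B (max 16B)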
# [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:38.350 10:39:58 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
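subnqn nqn.2019-08.org.qemu:12340 ties the controller back to its configured QEMU serial (sn above is likewise "12340"), which is what distinguishes the emulated subsystems from one another; on kernels that expose it, the same string can be read without nvme-cli:

    cat /sys/class/nvme/nvme1/subsysnqn   # -> nqn.2019-08.org.qemu:12340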
00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.350 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # 
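The local -n _ctrl_ns=nvme1_ns step above is a bash nameref: writes through _ctrl_ns land in the per-controller array nvme1_ns, and ${ns##*n} keeps only what follows the last "n", so nvme1n1 is stored at index 1. A stripped-down version of that bookkeeping (register_ns is an illustrative name):

    declare -A nvme1_ns=()
    register_ns() {
        local -n _ctrl_ns=$1      # nameref onto the caller's array
        local ns=$2
        _ctrl_ns[${ns##*n}]=$ns   # nvme1n1 -> index 1
    }
    register_ns nvme1_ns nvme1n1
    echo "${nvme1_ns[1]}"         # -> nvme1n1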
nvme1n1[ncap]=0x17a17a 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- 
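nsze, ncap, and nuse all read 0x17a17a, i.e. 1,548,666 logical blocks, and flbas=0x7 selects LBA format 7, which the lbaf7 entry further down shows as ms:64 lbads:12 (4096-byte data blocks). That puts this emulated namespace at roughly 5.9 GiB:

    nsze=$((0x17a17a))            # 1548666 blocks
    echo $((nsze * 4096))         # -> 6343335936 bytes, ~5.9 GiB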
nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.351 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:38.352 
10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:08:38.352 10:39:58 nvme_scc -- scripts/common.sh@18 -- # local i 00:08:38.352 10:39:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:38.352 10:39:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:38.352 10:39:58 nvme_scc -- scripts/common.sh@27 -- # return 0 00:08:38.352 10:39:58 nvme_scc -- 
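nvme1 is now registered just as nvme0 was (PCI address 0000:00:10.0 this time), and the loop moves on to nvme2 at 0000:00:12.0. The gate is the same each pass: pci_can_use in scripts/common.sh checks the candidate against block/skip lists, and since both are empty here (the [[ "" =~ ... ]] and [[ -z '' ]] tests), it returns 0 and the controller is kept. A rough sketch of the loop's shape, reconstructed from the trace rather than from the verbatim source (the BDF lookup in particular is illustrative):

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
        pci_can_use "$pci" || continue                    # block/skip-list gate
        ctrl_dev=${ctrl##*/}                              # e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
    done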
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:38.352 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:38.353 10:39:58 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:38.353 10:39:58 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
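
The xtrace above is nvme/functions.sh's nvme_get walking the output of `nvme id-ctrl` one "reg : val" pair at a time: @21 reads a line with IFS=:, @22 skips pairs with an empty value, and @23 evals the pair into a global associative array named after the device. A minimal reconstruction of that loop, pieced together from the trace rather than copied from functions.sh, so the exact whitespace trimming is an assumption:

    # Sketch of the parse loop traced above; reconstructed from the
    # xtrace output, details may differ from the real functions.sh.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # e.g. declares nvme2=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # skip headers/blank lines (@22)
            reg=${reg// /}                  # "sn   " -> "sn"
            eval "${ref}[\$reg]=\${val# }"  # nvme2[sn]='12342 ' (@23)
        done < <(/usr/local/src/nvme-cli/nvme "$@")  # path from the trace (@16)
    }
    # invoked above as: nvme_get nvme2 id-ctrl /dev/nvme2
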
00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.353 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:38.354 10:39:58 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
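
Most of the fields in this stretch come back zero because QEMU's emulated controller leaves the optional features unimplemented; the interesting capability bits were captured a few entries back as nvme2[oacs]=0x12a. Nothing in the trace consumes that value at this point, but a hypothetical follow-up check is a one-line bit test (bit 3 of OACS is Namespace Management per the NVMe spec):

    # Hypothetical consumer of the array built above -- not part of
    # the traced script. OACS bit 3 = Namespace Management support.
    oacs=0x12a                      # value captured in the trace
    if (( oacs & (1 << 3) )); then
        echo "nvme2 supports namespace management"
    fi
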
00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
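
The sqes=0x66 and cqes=0x44 values just captured are packed nibbles: the low nibble is the required (minimum) queue-entry size and the high nibble the maximum, both stored as log2 of the byte count. Decoding the traced values:

    # Decode of the packed SQES/CQES nibbles captured above.
    sqes=0x66 cqes=0x44
    printf 'SQ entry: %d..%d bytes\n' $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))
    printf 'CQ entry: %d..%d bytes\n' $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))
    # -> SQ entry: 64..64 bytes
    # -> CQ entry: 16..16 bytes
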
00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:38.354 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:38.355 
10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
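
The id-ns fields just parsed for nvme2n1 describe its geometry: flbas bits 0-3 select LBA format 4, whose descriptor (lbaf4, traced a little further down as "ms:0 lbads:12") gives 2^12 = 4096-byte blocks, and nsze counts blocks. Combining the traced values:

    # Geometry math for nvme2n1, using values from the trace above.
    nsze=0x100000 flbas=0x4 lbads=12
    echo "in-use lbaf: $((flbas & 0xf))"               # -> 4
    echo "namespace:   $((nsze * (1 << lbads))) bytes" # -> 4294967296 (4 GiB)
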
00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.355 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:38.356 10:39:58 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:38.356 10:39:58 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:38.356 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:38.357 10:39:58 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
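
mssrl=128 just landed, and the mcl=128 and msrc=127 entries that follow complete the Simple Copy limits this nvme_scc run is ultimately interested in: msrc is 0-based (127 means up to 128 source ranges), each range may span at most mssrl blocks, and a whole copy at most mcl blocks. A hypothetical validity check against the traced limits:

    # Not part of the traced script: a sketch of checking a copy
    # request against the mssrl/mcl/msrc limits captured here.
    mssrl=128 mcl=128 msrc=127
    ranges=2 per_range=64          # hypothetical copy descriptor
    if (( ranges <= msrc + 1 && per_range <= mssrl && ranges * per_range <= mcl )); then
        echo "copy descriptor fits nvme2n2's limits"
    fi
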
00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.357 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:38.358 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.622 
10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.622 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.623 
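All of the assignments above fall out of one small parsing loop: nvme_get runs nvme-cli's id-ns/id-ctrl, splits each "reg : val" output line at the first colon, and evals the pair into a global associative array (nvme2n2, nvme2n3, ...). A minimal standalone sketch of that idiom, assuming illustrative trimming that may differ from the real nvme/functions.sh (which uses the full nvme-cli path shown in the @16 entries):

    nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                      # e.g. declare -gA nvme2n3=()
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # strip padding around the key
        val=${val# }                           # drop the blank after ':'
        [[ -n $reg && -n $val ]] || continue   # skip blank/non key:value lines
        eval "${ref}[$reg]=\"\$val\""          # -> nvme2n3[nsze]=0x100000 ...
      done < <(nvme "$@")                      # e.g. nvme id-ns /dev/nvme2n3
    }

After `nvme_get nvme2n3 id-ns /dev/nvme2n3`, a caller can read `${nvme2n3[flbas]}` and get 0x4, which is exactly what the later feature checks rely on.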
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:08:38.623 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
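Worth decoding once: the lbafN strings captured above are LBA format descriptors, and flbas selects the live one. A small worked example in bash using the values just parsed (the low-nibble layout of FLBAS follows the NVMe spec and is stated here as an assumption, not shown in the trace):

    flbas=0x4
    echo $(( flbas & 0xf ))    # 4 -> the active format is lbaf4
    # lbaf4 reads "ms:0 lbads:12 rp:0 (in use)": lbads is log2 of the data
    # size, so this namespace uses 2^12-byte blocks with no metadata.
    echo $(( 1 << 12 ))        # 4096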
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:08:38.624 10:39:58 nvme_scc -- scripts/common.sh@18 -- # local i
00:08:38.624 10:39:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:08:38.624 10:39:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:38.624 10:39:58 nvme_scc -- scripts/common.sh@27 -- # return 0
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@18 -- # shift
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
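Two things happen at this point in the trace: nvme2 and its namespace map are registered in the script's global bookkeeping (the @60-@63 entries), and discovery moves on to nvme3, whose pci_can_use check passes trivially because the PCI allow/block lists are empty (the bare `[[ =~ ]]` and `[[ -z '' ]]` tests). The registration step, condensed into plain bash for illustration (the declarations live elsewhere in functions.sh and are assumed here):

    declare -A ctrls nvmes bdfs              # global registries (assumed decls)
    declare -a ordered_ctrls

    ctrl_dev=nvme2
    ctrls["$ctrl_dev"]=nvme2                 # name of the id-ctrl assoc array
    nvmes["$ctrl_dev"]=nvme2_ns              # name of the namespace map
    bdfs["$ctrl_dev"]=0000:00:12.0           # PCI address backing the controller
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme2   # numeric index keeps discovery order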
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 '
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl '
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:08:38.624 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0
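One field above that is easy to misread: mdts=7 is not a byte count but a power-of-two multiplier of the controller's minimum memory page size (CAP.MPSMIN, assumed to be 4 KiB here, which is the usual value for QEMU's emulated controller):

    mdts=7
    echo $(( (1 << mdts) * 4096 ))   # 524288 -> 512 KiB max transfer per command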
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1
00:08:38.625 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7
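Similarly, sqes=0x66 and cqes=0x44 pack log2(maximum supported) in the high nibble and log2(required minimum) in the low nibble, per the spec encoding (an assumption about these fields, not something the trace states):

    sqes=0x66; cqes=0x44
    echo $(( 1 << (sqes & 0xf) ))   # 64-byte submission queue entries
    echo $(( 1 << (cqes & 0xf) ))   # 16-byte completion queue entries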
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0
00:08:38.626 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0
00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0
00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:38.627 10:39:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:08:38.627 10:39:58 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:08:38.627 
10:39:59 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:08:38.627 10:39:59 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:08:38.627 10:39:59 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:08:38.627 10:39:59 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
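The loop traced above is the feature-selection pass for the SCC test: get_ctrls_with_feature probes each discovered controller with ctrl_has_scc, and the harness settles on nvme1 at 0000:00:10.0. As a reading aid, a condensed bash sketch of the check being exercised (not the verbatim functions.sh source): ONCS is the Optional NVM Command Support field from Identify Controller, and bit 8 advertises the Copy command, so oncs=0x15d passes because 0x15d & 0x100 is non-zero.

    # Condensed sketch of the ONCS test traced above (not verbatim functions.sh).
    ctrl_has_scc() {
        local ctrl=$1 oncs
        oncs=$(get_oncs "$ctrl") || return 1   # e.g. 0x15d, parsed from id-ctrl earlier
        ((oncs & 1 << 8))                      # ONCS bit 8 = Copy (simple copy) support
    }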
00:08:38.627 10:39:59 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:08:39.199 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:39.461 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:08:39.461 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:08:39.722 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:08:39.722 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:08:39.722 10:40:00 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:08:39.722 10:40:00 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:39.722 10:40:00 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:39.722 10:40:00 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:08:39.722 ************************************
00:08:39.722 START TEST nvme_simple_copy
00:08:39.722 ************************************
00:08:39.722 10:40:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:08:39.984 Initializing NVMe Controllers
00:08:39.984 Attaching to 0000:00:10.0
00:08:39.984 Controller supports SCC. Attached to 0000:00:10.0
00:08:39.984 Namespace ID: 1 size: 6GB
00:08:39.984 Initialization complete.
00:08:39.984
00:08:39.984 Controller QEMU NVMe Ctrl (12340 )
00:08:39.984 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:08:39.984 Namespace Block Size:4096
00:08:39.984 Writing LBAs 0 to 63 with Random Data
00:08:39.984 Copied LBAs from 0 - 63 to the Destination LBA 256
00:08:39.984 LBAs matching Written Data: 64
00:08:39.984
00:08:39.984 real 0m0.254s
00:08:39.984 user 0m0.084s
00:08:39.984 sys 0m0.068s
00:08:39.984 ************************************
00:08:39.984 END TEST nvme_simple_copy
00:08:39.984 ************************************
00:08:39.984 10:40:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:39.984 10:40:00 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:08:39.984 ************************************
00:08:39.984 END TEST nvme_scc
00:08:39.984 ************************************
00:08:39.984
00:08:39.984 real 0m7.805s
00:08:39.984 user 0m1.103s
00:08:39.984 sys 0m1.381s
00:08:39.984 10:40:00 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:39.984 10:40:00 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:08:39.984 10:40:00 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:08:39.984 10:40:00 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:08:39.984 10:40:00 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:08:39.984 10:40:00 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:08:39.984 10:40:00 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:08:39.984 10:40:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:39.984 10:40:00 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:39.984 10:40:00 -- common/autotest_common.sh@10 -- # set +x
00:08:39.984 ************************************
00:08:39.984 START TEST nvme_fdp
00:08:39.984 ************************************
00:08:39.984 10:40:00 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
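The START/END banners and real/user/sys blocks above come from the run_test wrapper in common/autotest_common.sh, which brackets every sub-test; the same wrapper is what just launched nvme_fdp. A simplified sketch of its shape (illustrative only; the real wrapper also saves and restores xtrace state and records timing bookkeeping):

    # Simplified sketch of the run_test bracketing seen throughout this log.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # e.g. run_test nvme_fdp test/nvme/nvme_fdp.sh
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }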
00:08:40.246 * Looking for test storage...
00:08:40.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:08:40.246 10:40:00 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:08:40.246 10:40:00 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version
00:08:40.246 10:40:00 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:08:40.246 10:40:00 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@368 -- # return 0
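The cmp_versions trace above is the lcov version gate: lt 1.15 2 splits both version strings on '.', '-', and ':' via IFS, compares the fields numerically left to right, and returns 0 because 1 < 2. A condensed sketch of the same comparison (scripts/common.sh carries more bookkeeping for the other operators and mixed-length versions):

    # Condensed sketch of the version comparison traced above.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2    # succeeds (returns 0), matching the trace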
00:08:40.246 10:40:00 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:40.246 10:40:00 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:08:40.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:40.246 --rc genhtml_branch_coverage=1
00:08:40.246 --rc genhtml_function_coverage=1
00:08:40.246 --rc genhtml_legend=1
00:08:40.246 --rc geninfo_all_blocks=1
00:08:40.246 --rc geninfo_unexecuted_blocks=1
00:08:40.246
00:08:40.246 '
00:08:40.246 10:40:00 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:08:40.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:40.246 --rc genhtml_branch_coverage=1
00:08:40.246 --rc genhtml_function_coverage=1
00:08:40.246 --rc genhtml_legend=1
00:08:40.246 --rc geninfo_all_blocks=1
00:08:40.246 --rc geninfo_unexecuted_blocks=1
00:08:40.246
00:08:40.246 '
00:08:40.246 10:40:00 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:08:40.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:40.246 --rc genhtml_branch_coverage=1
00:08:40.246 --rc genhtml_function_coverage=1
00:08:40.246 --rc genhtml_legend=1
00:08:40.246 --rc geninfo_all_blocks=1
00:08:40.246 --rc geninfo_unexecuted_blocks=1
00:08:40.246
00:08:40.246 '
00:08:40.246 10:40:00 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:08:40.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:40.246 --rc genhtml_branch_coverage=1
00:08:40.246 --rc genhtml_function_coverage=1
00:08:40.246 --rc genhtml_legend=1
00:08:40.246 --rc geninfo_all_blocks=1
00:08:40.246 --rc geninfo_unexecuted_blocks=1
00:08:40.246
00:08:40.246 '
00:08:40.246 10:40:00 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:08:40.246 10:40:00 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:08:40.246 10:40:00 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:08:40.246 10:40:00 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:08:40.246 10:40:00 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:40.246 10:40:00 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:40.246 10:40:00 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:40.246 10:40:00 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:40.246 10:40:00 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:40.246 10:40:00 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:08:40.246 10:40:00 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
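With the coverage environment and PATH in place, the next stretch of trace declares the registries functions.sh keeps (ctrls, nvmes, bdfs, ordered_ctrls), reruns setup.sh to rebind the controllers to the kernel nvme driver, and then scan_nvme_ctrls parses each controller's id-ctrl output into a bash associative array named after the device. A compressed sketch of that parsing step, as suggested by the trace (the real nvme_get uses shift and eval rather than a nameref):

    # Compressed sketch of nvme_get as traced below (not verbatim functions.sh).
    # "nvme id-ctrl /dev/nvme0" prints lines like "vid : 0x1b36"; each pair
    # lands in an associative array, e.g. nvme0[vid]=0x1b36.
    nvme_get() {
        local ref=$1 reg val
        declare -gA "$ref"
        local -n _ref=$ref
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # strip the padding around the key
            [[ -n $reg && -n $val ]] || continue
            _ref[$reg]=${val# }             # keep the value text as reported
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ref")
    }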
00:08:40.246 10:40:00 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:08:40.246 10:40:00 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:08:40.246 10:40:00 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:08:40.246 10:40:00 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:08:40.246 10:40:00 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:08:40.246 10:40:00 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:08:40.246 10:40:00 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:08:40.246 10:40:00 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:08:40.246 10:40:00 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:08:40.246 10:40:00 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:40.246 10:40:00 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:08:40.508 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:40.770 Waiting for block devices as requested
00:08:40.770 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:08:40.770 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:08:40.770 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:08:41.031 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:08:46.387 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:08:46.387 10:40:06 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:08:46.387 10:40:06 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:08:46.387 10:40:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:08:46.387 10:40:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:08:46.387 10:40:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:08:46.387 10:40:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:08:46.387 10:40:06 nvme_fdp -- scripts/common.sh@18 -- # local i
00:08:46.387 10:40:06 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:08:46.387 10:40:06 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:08:46.387 10:40:06 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:08:46.387 10:40:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:08:46.387 10:40:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:08:46.387 10:40:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:08:46.387 10:40:06 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:08:46.387 10:40:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:08:46.387 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:08:46.387 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:08:46.388 10:40:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:08:46.388 10:40:06 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.388 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:08:46.389 10:40:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:08:46.389 10:40:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.389 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:08:46.390 10:40:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:46.390 
10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:08:46.390 10:40:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:46.390 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:08:46.391 10:40:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.391 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:08:46.392 10:40:06 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:46.392 10:40:06 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:46.392 10:40:06 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:46.392 10:40:06 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # 
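The functions.sh@16-@23 lines traced above are nvme_get at work: it pipes `nvme id-ctrl` / `nvme id-ns` output through a colon-split read loop and evals each "register: value" pair into a global associative array named after the device, after which @58-@63 register the finished controller in the _ctrl_ns, ctrls, nvmes, bdfs and ordered_ctrls maps. A minimal sketch of that pattern, assuming a hypothetical helper name nvme_get_sketch (this is not the verbatim nvme/functions.sh source):

  nvme_get_sketch() {
      local ref=$1 reg val; shift
      local -gA "$ref=()"                 # global assoc array, e.g. nvme0
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue       # skip banner/blank lines with no value
          reg=${reg//[[:space:]]/}        # "vid   " -> "vid"
          val=${val# }                    # drop the one leading space, keep the rest verbatim
          eval "${ref}[\$reg]=\$val"      # -> nvme0[vid]=0x1b36, ...
      done < <("$@")
  }
  # usage, assuming nvme-cli and a /dev/nvme0 are present:
  #   nvme_get_sketch nvme0 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
  #   echo "${nvme0[mn]}"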
IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.392 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 
10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- 
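The wctemp=343 and cctemp=373 captured just above are the NVMe WCTEMP/CCTEMP fields, which the spec defines in Kelvin; these are QEMU's usual defaults of roughly 70 °C (warning) and 100 °C (critical). A hypothetical one-line converter, reusing the nvme1 array the trace builds:

  kelvin_to_c() { echo $(( $1 - 273 )); }   # integer approximation (0 C = 273.15 K)
  # kelvin_to_c "${nvme1[wctemp]}"   -> 70
  # kelvin_to_c "${nvme1[cctemp]}"   -> 100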
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 
10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.393 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:08:46.394 10:40:06 nvme_fdp -- 
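The sqes=0x66 and cqes=0x44 values above each pack two 4-bit log2 sizes (bits 3:0 = required entry size, bits 7:4 = maximum), so this controller requires and supports 64-byte submission-queue entries and 16-byte completion-queue entries. A hypothetical decode against the arrays the trace fills in:

  decode_qes() {
      local qes=$(( $1 ))                  # accepts "0x66" etc.
      echo "required=$(( 1 << (qes & 0xf) ))B max=$(( 1 << (qes >> 4) ))B"
  }
  # decode_qes "${nvme1[sqes]}"   -> required=64B max=64B
  # decode_qes "${nvme1[cqes]}"   -> required=16B max=16B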
nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.394 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
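Between the id-ctrl and id-ns parses above, functions.sh@53-@57 walk the controller's namespaces with a sysfs glob before invoking nvme_get again per namespace. A sketch of that walk, using the same expansion the trace shows:

  ctrl=/sys/class/nvme/nvme1
  for ns in "$ctrl/${ctrl##*/}n"*; do      # expands to .../nvme1n1, .../nvme1n2, ...
      [[ -e $ns ]] || continue             # the glob may match nothing
      ns_dev=${ns##*/}                     # e.g. nvme1n1
      echo "found namespace $ns_dev"
  done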
0x17a17a ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:08:46.395 10:40:06 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:08:46.395 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:08:46.396 10:40:06 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:46.396 10:40:06 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:46.396 10:40:06 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:46.396 10:40:06 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:08:46.396 
10:40:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:08:46.396 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:08:46.397 10:40:06 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.397 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:46.398 10:40:06 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:08:46.398 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:08:46.399 10:40:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.399 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:46.400 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.401 10:40:06 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:08:46.401 10:40:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:08:46.401 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@54 -- # for 
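Each namespace ends with the same eight lbafN entries, and flbas=0x4 marks lbaf4 as the format in use. A minimal sketch of decoding that, assuming the nvme2n2 array built above (per the NVMe spec, flbas bits 3:0 index the LBA formats and lbads is log2 of the LBA data size):

    # Pick the in-use LBA format and compute the block size from lbads.
    flbas=${nvme2n2[flbas]:-0x4}
    fmt=${nvme2n2[lbaf$((flbas & 0xf))]:-'ms:0 lbads:12 rp:0 (in use)'}
    lbads=${fmt##*lbads:}                 # text after "lbads:"
    lbads=${lbads%% *}                    # first token only
    echo "in-use block size: $((1 << lbads)) bytes"   # lbads:12 -> 4096

With lbaf4 = "ms:0 lbads:12 rp:0 (in use)", this yields 4096-byte blocks with no metadata, which is the format these QEMU namespaces report.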
ns in "$ctrl/${ctrl##*/}n"* 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.402 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:08:46.403 
10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:08:46.403 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:08:46.404 10:40:06 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:08:46.404 10:40:06 nvme_fdp -- scripts/common.sh@18 -- # local i 00:08:46.404 10:40:06 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:46.404 10:40:06 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:46.404 10:40:06 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:08:46.404 10:40:06 nvme_fdp -- 
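After nvme2's namespaces are collected, the trace registers the controller (ctrls, nvmes, bdfs, ordered_ctrls) and moves on to nvme3 at 0000:00:13.0 via pci_can_use. A minimal sketch of that enumeration step; pci_blocked and the array names mirror the trace, while the readlink route to the BDF is an assumption about the sysfs layout:

    # Walk /sys/class/nvme, resolve each controller's PCI address, and
    # record the name -> BDF mapping, honoring an optional block list.
    declare -A ctrls bdfs
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        name=${ctrl##*/}                                  # e.g. nvme3
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:13.0
        [[ ${pci_blocked:-} =~ $pci ]] && continue        # skip blocked BDFs
        ctrls[$name]=$name
        bdfs[$name]=$pci
    done

With pci_blocked empty, as in the "[[ =~ 0000:00:13.0 ]]" test above, nothing is skipped and every controller is admitted.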
nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.404 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 
10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:08:46.405 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 
10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
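The functions.sh@21-23 cycle repeated through this trace is one read loop: identify-controller output arrives as colon-separated "reg : val" pairs, every non-empty value is stored under its register name, and the loop continues below with subnqn and the power-state fields. A minimal standalone sketch of that pattern, assuming nvme-cli's "nvme id-ctrl" text output; the real helper evals into a dynamically named array (nvme0..nvme3), which a fixed array name makes unnecessary:

declare -A nvme3
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}               # register name, padding stripped
    val=${val#"${val%%[![:space:]]*}"}     # trim leading whitespace from the value
    [[ -n $reg && -n $val ]] && nvme3[$reg]=$val
done < <(nvme id-ctrl /dev/nvme3 2>/dev/null)
echo "sqes=${nvme3[sqes]:-unset} cqes=${nvme3[cqes]:-unset} subnqn=${nvme3[subnqn]:-unset}"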
00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:08:46.406 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.407 10:40:06 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:08:46.407 10:40:06 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
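Each ctrl_has_fdp call in the scan running here reduces to one arithmetic test: CTRATT bit 19 in the Identify Controller data advertises Flexible Data Placement. A sketch using the ctratt values from this run (the helper name is illustrative, not the SPDK function):

has_fdp() {
    local ctratt=$1
    (( ctratt & 1 << 19 ))    # CTRATT bit 19: FDP supported
}
has_fdp 0x8000  && echo "fdp capable"    # nvme0/nvme1/nvme2: bit 19 clear, prints nothing
has_fdp 0x88010 && echo "fdp capable"    # nvme3: 0x88010 contains 0x80000, prints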
00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:08:46.407 10:40:06 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:08:46.407 10:40:06 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:08:46.407 10:40:06 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:08:46.407 10:40:06 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:46.981 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:47.242 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.242 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.503 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.503 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:47.503 10:40:07 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:08:47.503 10:40:07 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:47.503 10:40:07 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:47.503 10:40:07 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:08:47.503 ************************************
00:08:47.503 START TEST nvme_flexible_data_placement ************************************
00:08:47.503 10:40:07 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:08:47.765 Initializing NVMe Controllers
00:08:47.765 Attaching to 0000:00:13.0
00:08:47.765 Controller supports FDP Attached to 0000:00:13.0
00:08:47.765 Namespace ID: 1 Endurance Group ID: 1
00:08:47.765 Initialization complete.
00:08:47.765
00:08:47.765 ==================================
00:08:47.765 == FDP tests for Namespace: #01 ==
00:08:47.765 ==================================
00:08:47.765
00:08:47.765 Get Feature: FDP:
00:08:47.765 =================
00:08:47.765 Enabled: Yes
00:08:47.765 FDP configuration Index: 0
00:08:47.765
00:08:47.765 FDP configurations log page
00:08:47.765 ===========================
00:08:47.765 Number of FDP configurations: 1
00:08:47.765 Version: 0
00:08:47.765 Size: 112
00:08:47.765 FDP Configuration Descriptor: 0
00:08:47.765 Descriptor Size: 96
00:08:47.765 Reclaim Group Identifier format: 2
00:08:47.765 FDP Volatile Write Cache: Not Present
00:08:47.765 FDP Configuration: Valid
00:08:47.765 Vendor Specific Size: 0
00:08:47.765 Number of Reclaim Groups: 2
00:08:47.765 Number of Reclaim Unit Handles: 8
00:08:47.765 Max Placement Identifiers: 128
00:08:47.765 Number of Namespaces Supported: 256
00:08:47.765 Reclaim Unit Nominal Size: 6000000 bytes
00:08:47.765 Estimated Reclaim Unit Time Limit: Not Reported
00:08:47.765 RUH Desc #000: RUH Type: Initially Isolated
00:08:47.765 RUH Desc #001: RUH Type: Initially Isolated
00:08:47.765 RUH Desc #002: RUH Type: Initially Isolated
00:08:47.765 RUH Desc #003: RUH Type: Initially Isolated
00:08:47.765 RUH Desc #004: RUH Type: Initially Isolated
00:08:47.765 RUH Desc #005: RUH Type: Initially Isolated
00:08:47.765 RUH Desc #006: RUH Type: Initially Isolated
00:08:47.765 RUH Desc #007: RUH Type: Initially Isolated
00:08:47.765
00:08:47.765 FDP reclaim unit handle usage log page
00:08:47.765 ======================================
00:08:47.765 Number of Reclaim Unit Handles: 8
00:08:47.765 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:08:47.765 RUH Usage Desc #001: RUH Attributes: Unused
00:08:47.765 RUH Usage Desc #002: RUH Attributes: Unused
00:08:47.765 RUH Usage Desc #003: RUH Attributes: Unused
00:08:47.765 RUH Usage Desc #004: RUH Attributes: Unused
00:08:47.765 RUH Usage Desc #005: RUH Attributes: Unused
00:08:47.765 RUH Usage Desc #006: RUH Attributes: Unused
00:08:47.765 RUH Usage Desc #007: RUH Attributes: Unused
00:08:47.765
00:08:47.765 FDP statistics log page
00:08:47.765 =======================
00:08:47.765 Host bytes with metadata written: 1660047360
00:08:47.765 Media bytes with metadata written: 1660813312
00:08:47.765 Media bytes erased: 0
00:08:47.765
00:08:47.765 FDP Reclaim unit handle status
00:08:47.765 ==============================
00:08:47.765 Number of RUHS descriptors: 2
00:08:47.765 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000030db
00:08:47.765 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:08:47.765
00:08:47.765 FDP write on placement id: 0 success
00:08:47.765
00:08:47.765 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:08:47.765
00:08:47.765 IO mgmt send: RUH update for Placement ID: #0 Success
00:08:47.765
00:08:47.765 Get Feature: FDP Events for Placement handle: #0
00:08:47.765 ========================
00:08:47.765 Number of FDP Events: 6
00:08:47.765 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:08:47.765 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:08:47.765 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes
00:08:47.765 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:08:47.765 FDP Event: #4 Type: Media Reallocated Enabled: No
00:08:47.765 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:08:47.765
00:08:47.765 FDP events log page
00:08:47.765 ===================
00:08:47.765 Number of FDP events: 1
00:08:47.765 FDP Event #0:
00:08:47.765 Event Type: RU Not Written to Capacity
00:08:47.765 Placement Identifier: Valid
00:08:47.765 NSID: Valid
00:08:47.765 Location: Valid
00:08:47.765 Placement Identifier: 0
00:08:47.765 Event Timestamp: 4
00:08:47.765 Namespace Identifier: 1
00:08:47.765 Reclaim Group Identifier: 0
00:08:47.765 Reclaim Unit Handle Identifier: 0
00:08:47.765
00:08:47.765 FDP test passed
00:08:47.765
00:08:47.765 real 0m0.206s
00:08:47.765 user 0m0.044s
00:08:47.765 sys 0m0.060s
00:08:47.765 10:40:08 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:47.765 ************************************
00:08:47.765 END TEST nvme_flexible_data_placement ************************************
00:08:47.765 10:40:08 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:08:47.765 ************************************
00:08:47.765 END TEST nvme_fdp ************************************
00:08:47.765
00:08:47.765 real 0m7.690s
00:08:47.765 user 0m0.998s
00:08:47.765 sys 0m1.399s
00:08:47.766 10:40:08 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:47.766 10:40:08 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:08:47.766 10:40:08 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:08:47.766 10:40:08 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:08:47.766 10:40:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:47.766 10:40:08 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:47.766 10:40:08 -- common/autotest_common.sh@10 -- # set +x
00:08:47.766 ************************************
00:08:47.766 START TEST nvme_rpc ************************************
00:08:47.766 10:40:08 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:08:47.766 * Looking for test storage...
00:08:47.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.028 10:40:08 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:48.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.028 --rc genhtml_branch_coverage=1 00:08:48.028 --rc genhtml_function_coverage=1 00:08:48.028 --rc genhtml_legend=1 00:08:48.028 --rc geninfo_all_blocks=1 00:08:48.028 --rc geninfo_unexecuted_blocks=1 00:08:48.028 00:08:48.028 ' 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:48.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.028 --rc genhtml_branch_coverage=1 00:08:48.028 --rc genhtml_function_coverage=1 00:08:48.028 --rc genhtml_legend=1 00:08:48.028 --rc geninfo_all_blocks=1 00:08:48.028 --rc geninfo_unexecuted_blocks=1 00:08:48.028 00:08:48.028 ' 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:08:48.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.028 --rc genhtml_branch_coverage=1 00:08:48.028 --rc genhtml_function_coverage=1 00:08:48.028 --rc genhtml_legend=1 00:08:48.028 --rc geninfo_all_blocks=1 00:08:48.028 --rc geninfo_unexecuted_blocks=1 00:08:48.028 00:08:48.028 ' 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:48.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.028 --rc genhtml_branch_coverage=1 00:08:48.028 --rc genhtml_function_coverage=1 00:08:48.028 --rc genhtml_legend=1 00:08:48.028 --rc geninfo_all_blocks=1 00:08:48.028 --rc geninfo_unexecuted_blocks=1 00:08:48.028 00:08:48.028 ' 00:08:48.028 10:40:08 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:48.028 10:40:08 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:08:48.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.028 10:40:08 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:08:48.028 10:40:08 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78882 00:08:48.028 10:40:08 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:08:48.028 10:40:08 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:08:48.028 10:40:08 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78882 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 78882 ']' 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.028 10:40:08 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:48.028 [2024-10-08 10:40:08.549863] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
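get_first_nvme_bdf, traced above, relies on gen_nvme.sh emitting a JSON bdev config in which each controller's traddr is its PCI address; jq flattens those into a list and the first entry becomes the target. A condensed sketch of the same pipeline (the guard and the error message are additions here, not part of the original helper):

get_first_nvme_bdf() {
    local bdfs
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; return 1; }
    echo "${bdfs[0]}"
}
bdf=$(get_first_nvme_bdf)    # 0000:00:10.0 on this run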
00:08:48.028 [2024-10-08 10:40:08.549979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78882 ] 00:08:48.289 [2024-10-08 10:40:08.678987] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:48.289 [2024-10-08 10:40:08.696537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:48.289 [2024-10-08 10:40:08.730001] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.289 [2024-10-08 10:40:08.730037] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.860 10:40:09 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.860 10:40:09 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:48.860 10:40:09 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:08:49.121 Nvme0n1 00:08:49.121 10:40:09 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:08:49.121 10:40:09 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:08:49.382 request: 00:08:49.382 { 00:08:49.382 "bdev_name": "Nvme0n1", 00:08:49.382 "filename": "non_existing_file", 00:08:49.382 "method": "bdev_nvme_apply_firmware", 00:08:49.382 "req_id": 1 00:08:49.382 } 00:08:49.382 Got JSON-RPC error response 00:08:49.382 response: 00:08:49.382 { 00:08:49.382 "code": -32603, 00:08:49.382 "message": "open file failed." 00:08:49.382 } 00:08:49.382 10:40:09 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:08:49.382 10:40:09 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:08:49.382 10:40:09 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:08:49.644 10:40:10 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:49.644 10:40:10 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 78882 00:08:49.644 10:40:10 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 78882 ']' 00:08:49.644 10:40:10 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 78882 00:08:49.644 10:40:10 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:08:49.644 10:40:10 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.644 10:40:10 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78882 00:08:49.644 killing process with pid 78882 00:08:49.644 10:40:10 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:49.644 10:40:10 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.644 10:40:10 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78882' 00:08:49.644 10:40:10 nvme_rpc -- common/autotest_common.sh@969 -- # kill 78882 00:08:49.644 10:40:10 nvme_rpc -- common/autotest_common.sh@974 -- # wait 78882 00:08:49.905 ************************************ 00:08:49.905 END TEST nvme_rpc 00:08:49.905 ************************************ 00:08:49.905 00:08:49.905 real 0m2.081s 00:08:49.905 user 0m4.043s 00:08:49.905 sys 0m0.485s 00:08:49.905 10:40:10 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.905 10:40:10 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:49.905 10:40:10 
-- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:08:49.905 10:40:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:49.905 10:40:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.905 10:40:10 -- common/autotest_common.sh@10 -- # set +x 00:08:49.905 ************************************ 00:08:49.905 START TEST nvme_rpc_timeouts 00:08:49.905 ************************************ 00:08:49.905 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:08:49.905 * Looking for test storage... 00:08:50.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:50.167 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:50.167 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:08:50.167 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:50.167 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.167 10:40:10 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:08:50.167 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.167 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:50.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.167 --rc genhtml_branch_coverage=1 00:08:50.167 --rc genhtml_function_coverage=1 00:08:50.167 --rc genhtml_legend=1 00:08:50.167 --rc geninfo_all_blocks=1 00:08:50.167 --rc geninfo_unexecuted_blocks=1 00:08:50.167 00:08:50.167 ' 00:08:50.167 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:50.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.167 --rc genhtml_branch_coverage=1 00:08:50.167 --rc genhtml_function_coverage=1 00:08:50.167 --rc genhtml_legend=1 00:08:50.167 --rc geninfo_all_blocks=1 00:08:50.167 --rc geninfo_unexecuted_blocks=1 00:08:50.167 00:08:50.167 ' 00:08:50.167 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:50.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.167 --rc genhtml_branch_coverage=1 00:08:50.167 --rc genhtml_function_coverage=1 00:08:50.167 --rc genhtml_legend=1 00:08:50.167 --rc geninfo_all_blocks=1 00:08:50.167 --rc geninfo_unexecuted_blocks=1 00:08:50.167 00:08:50.167 ' 00:08:50.167 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:50.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.167 --rc genhtml_branch_coverage=1 00:08:50.167 --rc genhtml_function_coverage=1 00:08:50.167 --rc genhtml_legend=1 00:08:50.167 --rc geninfo_all_blocks=1 00:08:50.167 --rc geninfo_unexecuted_blocks=1 00:08:50.167 00:08:50.167 ' 00:08:50.167 10:40:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.167 10:40:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78936 00:08:50.167 10:40:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78936 00:08:50.167 10:40:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78968 00:08:50.167 10:40:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
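The "lt 1.15 2" trace that opens each test script is scripts/common.sh comparing the installed lcov version field by field: both strings are split on '.', '-' and ':', missing fields default to zero, and the first unequal field decides. A trimmed sketch of the same idea, handling the numeric case only (the real cmp_versions also supports other operators):

version_lt() {    # returns 0 when $1 < $2, e.g. version_lt 1.15 2
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov is older than 2"    # field 0 decides: 1 < 2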
00:08:50.167 10:40:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78968 00:08:50.168 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 78968 ']' 00:08:50.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.168 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.168 10:40:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:08:50.168 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.168 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.168 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.168 10:40:10 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:08:50.168 [2024-10-08 10:40:10.633488] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:50.168 [2024-10-08 10:40:10.633758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78968 ] 00:08:50.429 [2024-10-08 10:40:10.763268] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:50.429 [2024-10-08 10:40:10.783366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:50.429 [2024-10-08 10:40:10.816597] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.429 [2024-10-08 10:40:10.816695] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.001 10:40:11 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.001 10:40:11 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:08:51.001 10:40:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:08:51.001 Checking default timeout settings: 00:08:51.001 10:40:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:08:51.262 Making settings changes with rpc: 00:08:51.262 10:40:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:08:51.262 10:40:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:08:51.523 10:40:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:08:51.523 Check default vs. 
modified settings: 00:08:51.523 10:40:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78936 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78936 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:51.783 Setting action_on_timeout is changed as expected. 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78936 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78936 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:51.783 Setting timeout_us is changed as expected. 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:08:51.783 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78936 00:08:51.784 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:08:51.784 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:51.784 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:08:51.784 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78936 00:08:51.784 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:08:51.784 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:08:51.784 Setting timeout_admin_us is changed as expected. 00:08:51.784 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:08:51.784 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:08:51.784 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:08:51.784 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:08:51.784 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78936 /tmp/settings_modified_78936 00:08:51.784 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78968 00:08:51.784 10:40:12 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 78968 ']' 00:08:51.784 10:40:12 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 78968 00:08:51.784 10:40:12 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:08:51.784 10:40:12 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.784 10:40:12 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78968 00:08:52.044 killing process with pid 78968 00:08:52.044 10:40:12 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:52.044 10:40:12 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:52.044 10:40:12 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78968' 00:08:52.044 10:40:12 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 78968 00:08:52.044 10:40:12 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 78968 00:08:52.306 RPC TIMEOUT SETTING TEST PASSED. 00:08:52.306 10:40:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
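Each settings_to_check iteration traced above is the same three-stage extraction run against both save_config snapshots: grep pulls the option's line out of the saved JSON, awk takes the value column, and sed strips the JSON punctuation so "none"/"abort" or the raw numbers compare directly. A condensed sketch (tmpfile names from this run; assumes each option name appears once in the config):

extract_setting() {
    grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
}
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(extract_setting "$setting" /tmp/settings_default_78936)
    after=$(extract_setting "$setting" /tmp/settings_modified_78936)
    [[ "$before" == "$after" ]] && { echo "ERROR: $setting not changed" >&2; exit 1; }
    echo "Setting $setting is changed as expected."
done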
00:08:52.306 ************************************ 00:08:52.306 END TEST nvme_rpc_timeouts 00:08:52.306 ************************************ 00:08:52.306 00:08:52.306 real 0m2.232s 00:08:52.306 user 0m4.454s 00:08:52.306 sys 0m0.465s 00:08:52.306 10:40:12 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.306 10:40:12 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:08:52.306 10:40:12 -- spdk/autotest.sh@239 -- # uname -s 00:08:52.306 10:40:12 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:08:52.306 10:40:12 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:08:52.306 10:40:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.306 10:40:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.306 10:40:12 -- common/autotest_common.sh@10 -- # set +x 00:08:52.306 ************************************ 00:08:52.306 START TEST sw_hotplug 00:08:52.306 ************************************ 00:08:52.306 10:40:12 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:08:52.306 * Looking for test storage... 00:08:52.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:52.306 10:40:12 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:52.306 10:40:12 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:08:52.306 10:40:12 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:52.306 10:40:12 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.306 10:40:12 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:08:52.306 10:40:12 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.306 10:40:12 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:52.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.306 --rc genhtml_branch_coverage=1 00:08:52.306 --rc genhtml_function_coverage=1 00:08:52.306 --rc genhtml_legend=1 00:08:52.306 --rc geninfo_all_blocks=1 00:08:52.306 --rc geninfo_unexecuted_blocks=1 00:08:52.306 00:08:52.306 ' 00:08:52.306 10:40:12 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:52.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.306 --rc genhtml_branch_coverage=1 00:08:52.306 --rc genhtml_function_coverage=1 00:08:52.306 --rc genhtml_legend=1 00:08:52.306 --rc geninfo_all_blocks=1 00:08:52.306 --rc geninfo_unexecuted_blocks=1 00:08:52.306 00:08:52.306 ' 00:08:52.306 10:40:12 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:52.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.306 --rc genhtml_branch_coverage=1 00:08:52.306 --rc genhtml_function_coverage=1 00:08:52.306 --rc genhtml_legend=1 00:08:52.306 --rc geninfo_all_blocks=1 00:08:52.306 --rc geninfo_unexecuted_blocks=1 00:08:52.306 00:08:52.306 ' 00:08:52.306 10:40:12 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:52.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.306 --rc genhtml_branch_coverage=1 00:08:52.306 --rc genhtml_function_coverage=1 00:08:52.306 --rc genhtml_legend=1 00:08:52.306 --rc geninfo_all_blocks=1 00:08:52.306 --rc geninfo_unexecuted_blocks=1 00:08:52.306 00:08:52.306 ' 00:08:52.306 10:40:12 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:52.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:52.828 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:52.828 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:52.828 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:52.828 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:52.828 10:40:13 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:08:52.828 10:40:13 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:08:52.828 10:40:13 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
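nvme_in_userspace, expanded in the trace below, finds NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory) and prog-if 02 (NVMe) are formatted into "0108" and "-p02" and matched against lspci's numeric output. The pipeline at its core, lifted from the trace (the example output reflects this run's four controllers):

# class/subclass fields in lspci -mm -n -D output are quoted,
# hence the quoted cc value and the trailing tr -d '"'
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# -> 0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0 (one per line)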
00:08:52.828 10:40:13 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@233 -- # local class 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:08:52.828 10:40:13 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:52.829 10:40:13 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@18 -- # local i 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:08:52.829 10:40:13 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:52.829 10:40:13 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:08:52.829 10:40:13 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:08:52.829 10:40:13 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:53.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:53.349 Waiting for block devices as requested 00:08:53.349 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:53.349 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:53.610 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:53.610 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:58.923 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:58.923 10:40:19 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:08:58.923 10:40:19 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:59.184 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:08:59.184 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:59.184 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:08:59.446 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:08:59.707 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:59.707 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:59.707 10:40:20 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:08:59.707 10:40:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:08:59.707 10:40:20 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:08:59.707 10:40:20 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:08:59.707 10:40:20 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=79814 00:08:59.707 10:40:20 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:08:59.707 10:40:20 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:08:59.707 10:40:20 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:08:59.707 10:40:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:08:59.707 10:40:20 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:08:59.707 10:40:20 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:08:59.707 10:40:20 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:08:59.707 10:40:20 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:08:59.707 10:40:20 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:08:59.707 10:40:20 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:08:59.707 10:40:20 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:08:59.707 10:40:20 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:08:59.707 10:40:20 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:08:59.707 10:40:20 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:08:59.968 Initializing NVMe Controllers 00:08:59.968 Attaching to 0000:00:10.0 00:08:59.968 Attaching to 0000:00:11.0 00:08:59.968 Attached to 0000:00:10.0 00:08:59.968 Attached to 0000:00:11.0 00:08:59.968 Initialization complete. Starting I/O... 
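The @19-@36 portion of the trace is the timing harness: timing_cmd sets `TIMEFORMAT=%2R`, runs remove_attach_helper under bash's `time` builtin, and keeps only the elapsed seconds (surfacing later as `helper_time=42.95`), while the background `build/examples/hotplug -i 0 -t 0 -n 6 -r 6` process attaches the controllers and drives I/O. A simplified sketch of that timing pattern, with the fd plumbing reduced (the upstream autotest_common.sh version does more):

    # Simplified sketch of the timing_cmd pattern from the trace: run a command
    # under bash's `time` builtin with TIMEFORMAT=%2R so only the wall-clock
    # seconds (two decimals) are captured from time's stderr report.
    timing_cmd() {
        local TIMEFORMAT=%2R elapsed
        exec 3>&1                            # keep the command's stdout visible
        elapsed=$({ time "$@" 1>&3; } 2>&1)  # note: also captures the command's stderr
        exec 3>&-
        echo "$1 took ${elapsed}s"
    }

    timing_cmd sleep 0.3                     # prints "sleep took 0.30s"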
00:08:59.968 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:08:59.969 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:08:59.969 00:09:00.911 QEMU NVMe Ctrl (12340 ): 3056 I/Os completed (+3056) 00:09:00.911 QEMU NVMe Ctrl (12341 ): 3056 I/Os completed (+3056) 00:09:00.911 00:09:01.854 QEMU NVMe Ctrl (12340 ): 6780 I/Os completed (+3724) 00:09:01.854 QEMU NVMe Ctrl (12341 ): 6780 I/Os completed (+3724) 00:09:01.854 00:09:03.241 QEMU NVMe Ctrl (12340 ): 10544 I/Os completed (+3764) 00:09:03.241 QEMU NVMe Ctrl (12341 ): 10544 I/Os completed (+3764) 00:09:03.241 00:09:04.190 QEMU NVMe Ctrl (12340 ): 14324 I/Os completed (+3780) 00:09:04.190 QEMU NVMe Ctrl (12341 ): 14324 I/Os completed (+3780) 00:09:04.190 00:09:05.128 QEMU NVMe Ctrl (12340 ): 18184 I/Os completed (+3860) 00:09:05.128 QEMU NVMe Ctrl (12341 ): 18184 I/Os completed (+3860) 00:09:05.128 00:09:05.696 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:05.696 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:05.696 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:05.696 [2024-10-08 10:40:26.238897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:05.696 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:05.696 [2024-10-08 10:40:26.239739] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 [2024-10-08 10:40:26.239772] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 [2024-10-08 10:40:26.239789] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 [2024-10-08 10:40:26.239812] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:05.696 [2024-10-08 10:40:26.241053] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 [2024-10-08 10:40:26.241141] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 [2024-10-08 10:40:26.241170] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 [2024-10-08 10:40:26.241228] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:05.696 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:05.696 [2024-10-08 10:40:26.266236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:05.696 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:05.696 [2024-10-08 10:40:26.267249] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 [2024-10-08 10:40:26.267433] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 [2024-10-08 10:40:26.267524] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 [2024-10-08 10:40:26.267560] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:05.696 [2024-10-08 10:40:26.269017] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 [2024-10-08 10:40:26.269121] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 [2024-10-08 10:40:26.269161] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.696 [2024-10-08 10:40:26.269193] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:05.958 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:05.958 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:05.958 EAL: Scan for (pci) bus failed. 00:09:05.958 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:05.958 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:05.958 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:05.958 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:05.958 00:09:05.958 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:05.958 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:05.958 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:05.958 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:05.958 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:05.958 Attaching to 0000:00:10.0 00:09:05.958 Attached to 0000:00:10.0 00:09:05.958 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:06.220 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:06.220 10:40:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:06.220 Attaching to 0000:00:11.0 00:09:06.220 Attached to 0000:00:11.0 00:09:07.164 QEMU NVMe Ctrl (12340 ): 3496 I/Os completed (+3496) 00:09:07.164 QEMU NVMe Ctrl (12341 ): 3208 I/Os completed (+3208) 00:09:07.164 00:09:08.106 QEMU NVMe Ctrl (12340 ): 7271 I/Os completed (+3775) 00:09:08.106 QEMU NVMe Ctrl (12341 ): 6977 I/Os completed (+3769) 00:09:08.106 00:09:09.084 QEMU NVMe Ctrl (12340 ): 11035 I/Os completed (+3764) 00:09:09.084 QEMU NVMe Ctrl (12341 ): 10741 I/Os completed (+3764) 00:09:09.084 00:09:10.018 QEMU NVMe Ctrl (12340 ): 15205 I/Os completed (+4170) 00:09:10.018 QEMU NVMe Ctrl (12341 ): 14915 I/Os completed (+4174) 00:09:10.018 00:09:10.952 QEMU NVMe Ctrl (12340 ): 19462 I/Os completed (+4257) 00:09:10.953 QEMU NVMe Ctrl (12341 ): 19163 I/Os completed (+4248) 00:09:10.953 00:09:11.888 QEMU NVMe Ctrl (12340 ): 23518 I/Os completed (+4056) 00:09:11.888 QEMU NVMe Ctrl (12341 ): 23304 I/Os completed (+4141) 00:09:11.888 00:09:13.272 QEMU NVMe Ctrl (12340 ): 27262 I/Os completed (+3744) 00:09:13.272 QEMU NVMe Ctrl (12341 ): 27056 I/Os completed (+3752) 
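Each hotplug event in this run is driven from sysfs: the `echo 1` at sw_hotplug.sh line 40 surprise-removes a controller (producing the nvme_ctrlr_fail and qpair-abort errors above), and the @56-@62 echoes bring the device back and rebind it to uio_pci_generic before the next iteration. A sketch of one such cycle; the sysfs file paths are my assumption from standard Linux PCI sysfs, since the xtrace shows only the values being echoed:

    # One hot-remove / re-attach cycle as implied by the echoes in the trace.
    # The sysfs paths are assumptions (standard Linux PCI sysfs); the xtrace
    # shows only the echoed values, not the destination files.
    bdf=0000:00:10.0

    echo 1 > "/sys/bus/pci/devices/$bdf/remove"            # surprise hot-remove
    # ... the hotplug app sees the controller fail and unregisters it ...

    echo 1 > /sys/bus/pci/rescan                           # rediscover the device
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe               # bind per the override
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"  # clear the override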
00:09:13.272 00:09:13.844 QEMU NVMe Ctrl (12340 ): 30926 I/Os completed (+3664) 00:09:13.844 QEMU NVMe Ctrl (12341 ): 30726 I/Os completed (+3670) 00:09:13.844 00:09:15.226 QEMU NVMe Ctrl (12340 ): 34566 I/Os completed (+3640) 00:09:15.226 QEMU NVMe Ctrl (12341 ): 34366 I/Os completed (+3640) 00:09:15.226 00:09:16.160 QEMU NVMe Ctrl (12340 ): 38711 I/Os completed (+4145) 00:09:16.160 QEMU NVMe Ctrl (12341 ): 38522 I/Os completed (+4156) 00:09:16.160 00:09:17.093 QEMU NVMe Ctrl (12340 ): 42881 I/Os completed (+4170) 00:09:17.093 QEMU NVMe Ctrl (12341 ): 42700 I/Os completed (+4178) 00:09:17.093 00:09:18.028 QEMU NVMe Ctrl (12340 ): 47039 I/Os completed (+4158) 00:09:18.028 QEMU NVMe Ctrl (12341 ): 46894 I/Os completed (+4194) 00:09:18.028 00:09:18.028 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:18.028 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:18.028 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:18.028 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:18.028 [2024-10-08 10:40:38.552530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:18.028 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:18.028 [2024-10-08 10:40:38.554035] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.028 [2024-10-08 10:40:38.554145] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.028 [2024-10-08 10:40:38.554181] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.028 [2024-10-08 10:40:38.554252] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.028 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:18.029 [2024-10-08 10:40:38.555528] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.029 [2024-10-08 10:40:38.555558] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.029 [2024-10-08 10:40:38.555573] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.029 [2024-10-08 10:40:38.555585] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.029 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:18.029 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:18.029 [2024-10-08 10:40:38.575385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:18.029 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:18.029 [2024-10-08 10:40:38.576445] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.029 [2024-10-08 10:40:38.576486] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.029 [2024-10-08 10:40:38.576502] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.029 [2024-10-08 10:40:38.576526] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.029 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:18.029 [2024-10-08 10:40:38.577781] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.029 [2024-10-08 10:40:38.577831] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.029 [2024-10-08 10:40:38.577847] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.029 [2024-10-08 10:40:38.577861] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:18.029 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:18.029 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:18.289 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:18.289 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:18.289 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:18.289 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:18.289 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:18.289 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:18.289 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:18.289 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:18.289 Attaching to 0000:00:10.0 00:09:18.289 Attached to 0000:00:10.0 00:09:18.289 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:18.289 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:18.289 10:40:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:18.289 Attaching to 0000:00:11.0 00:09:18.289 Attached to 0000:00:11.0 00:09:18.862 QEMU NVMe Ctrl (12340 ): 2384 I/Os completed (+2384) 00:09:18.862 QEMU NVMe Ctrl (12341 ): 2059 I/Os completed (+2059) 00:09:18.862 00:09:20.269 QEMU NVMe Ctrl (12340 ): 6330 I/Os completed (+3946) 00:09:20.269 QEMU NVMe Ctrl (12341 ): 6008 I/Os completed (+3949) 00:09:20.269 00:09:20.844 QEMU NVMe Ctrl (12340 ): 10475 I/Os completed (+4145) 00:09:20.844 QEMU NVMe Ctrl (12341 ): 10194 I/Os completed (+4186) 00:09:20.844 00:09:22.228 QEMU NVMe Ctrl (12340 ): 14171 I/Os completed (+3696) 00:09:22.228 QEMU NVMe Ctrl (12341 ): 13902 I/Os completed (+3708) 00:09:22.228 00:09:23.171 QEMU NVMe Ctrl (12340 ): 17911 I/Os completed (+3740) 00:09:23.171 QEMU NVMe Ctrl (12341 ): 17640 I/Os completed (+3738) 00:09:23.171 00:09:24.116 QEMU NVMe Ctrl (12340 ): 21575 I/Os completed (+3664) 00:09:24.116 QEMU NVMe Ctrl (12341 ): 21362 I/Os completed (+3722) 00:09:24.116 00:09:25.058 QEMU NVMe Ctrl (12340 ): 25351 I/Os completed (+3776) 00:09:25.058 QEMU NVMe Ctrl (12341 ): 25156 I/Os completed (+3794) 00:09:25.058 00:09:25.999 QEMU NVMe Ctrl (12340 ): 29033 I/Os completed (+3682) 00:09:25.999 QEMU NVMe Ctrl (12341 ): 28866 I/Os completed (+3710) 00:09:25.999 
00:09:26.940 QEMU NVMe Ctrl (12340 ): 32784 I/Os completed (+3751) 00:09:26.940 QEMU NVMe Ctrl (12341 ): 32629 I/Os completed (+3763) 00:09:26.940 00:09:27.893 QEMU NVMe Ctrl (12340 ): 36552 I/Os completed (+3768) 00:09:27.893 QEMU NVMe Ctrl (12341 ): 36405 I/Os completed (+3776) 00:09:27.893 00:09:29.278 QEMU NVMe Ctrl (12340 ): 40524 I/Os completed (+3972) 00:09:29.278 QEMU NVMe Ctrl (12341 ): 40403 I/Os completed (+3998) 00:09:29.278 00:09:29.850 QEMU NVMe Ctrl (12340 ): 44192 I/Os completed (+3668) 00:09:29.850 QEMU NVMe Ctrl (12341 ): 44097 I/Os completed (+3694) 00:09:29.850 00:09:30.424 10:40:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:30.424 10:40:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:30.424 10:40:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:30.424 10:40:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:30.424 [2024-10-08 10:40:50.864423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:09:30.424 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:30.424 [2024-10-08 10:40:50.867699] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 [2024-10-08 10:40:50.867757] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 [2024-10-08 10:40:50.867785] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 [2024-10-08 10:40:50.867840] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:30.424 [2024-10-08 10:40:50.869104] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 [2024-10-08 10:40:50.869141] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 [2024-10-08 10:40:50.869155] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 [2024-10-08 10:40:50.869167] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 PCI_BUS: Cannot open sysfs resource 00:09:30.424 PCI_BUS: pci_scan_one(): cannot parse resource 00:09:30.424 EAL: Scan for (pci) bus failed. 00:09:30.424 10:40:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:30.424 10:40:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:30.424 [2024-10-08 10:40:50.885477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:30.424 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:30.424 [2024-10-08 10:40:50.886590] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 [2024-10-08 10:40:50.886703] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 [2024-10-08 10:40:50.886723] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 [2024-10-08 10:40:50.886739] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:30.424 [2024-10-08 10:40:50.887945] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 [2024-10-08 10:40:50.887987] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 [2024-10-08 10:40:50.888000] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 [2024-10-08 10:40:50.888014] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:30.424 10:40:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:30.424 10:40:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:30.685 10:40:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:30.685 10:40:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:30.685 10:40:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:30.685 10:40:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:30.685 10:40:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:30.685 10:40:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:30.685 10:40:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:30.685 10:40:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:30.685 Attaching to 0000:00:10.0 00:09:30.685 Attached to 0000:00:10.0 00:09:30.685 10:40:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:30.685 10:40:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:30.685 10:40:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:30.685 Attaching to 0000:00:11.0 00:09:30.685 Attached to 0000:00:11.0 00:09:30.685 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:30.685 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:30.685 [2024-10-08 10:40:51.187834] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:09:42.949 10:41:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:42.949 10:41:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:42.949 10:41:03 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.95 00:09:42.949 10:41:03 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.95 00:09:42.949 10:41:03 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:09:42.949 10:41:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.95 00:09:42.949 10:41:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.95 2 00:09:42.949 remove_attach_helper took 42.95s to complete (handling 2 nvme drive(s)) 10:41:03 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:09:49.533 10:41:09 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 79814 00:09:49.533 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (79814) - No such process 00:09:49.533 10:41:09 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 79814 00:09:49.533 10:41:09 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:09:49.533 10:41:09 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:09:49.533 10:41:09 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:09:49.533 10:41:09 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80364 00:09:49.533 10:41:09 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:49.533 10:41:09 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:09:49.533 10:41:09 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80364 00:09:49.533 10:41:09 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 80364 ']' 00:09:49.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.533 10:41:09 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.533 10:41:09 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.533 10:41:09 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.533 10:41:09 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.533 10:41:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:49.533 [2024-10-08 10:41:09.280867] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:09:49.533 [2024-10-08 10:41:09.281022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80364 ] 00:09:49.533 [2024-10-08 10:41:09.414370] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
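At this point the test switches modes: run_hotplug (the standalone example binary) is done, and tgt_run_hotplug starts a full spdk_tgt so the remaining hotplug events can be observed through the bdev layer. waitforlisten blocks until the target's JSON-RPC server answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern; the polling loop is illustrative and $SPDK_DIR is a placeholder, as upstream waitforlisten is considerably more thorough:

    # Minimal sketch of starting spdk_tgt and waiting for its RPC socket.
    # $SPDK_DIR is a placeholder; the retry loop is illustrative only.
    "$SPDK_DIR/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!

    for ((i = 0; i < 100; i++)); do
        if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break                           # RPC server is up and answering
        fi
        kill -0 "$spdk_tgt_pid" || exit 1   # give up if the target died
        sleep 0.1
    done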
00:09:49.533 [2024-10-08 10:41:09.435415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.533 [2024-10-08 10:41:09.485544] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.795 10:41:10 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.795 10:41:10 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:09:49.795 10:41:10 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:09:49.795 10:41:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.795 10:41:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:49.795 10:41:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.795 10:41:10 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:09:49.795 10:41:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:49.795 10:41:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:09:49.795 10:41:10 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:49.795 10:41:10 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:49.795 10:41:10 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:49.795 10:41:10 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:49.795 10:41:10 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:09:49.795 10:41:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:49.795 10:41:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:49.795 10:41:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:09:49.795 10:41:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:49.795 10:41:10 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:56.403 10:41:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.403 10:41:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:09:56.403 10:41:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:09:56.403 [2024-10-08 10:41:16.234250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:09:56.403 [2024-10-08 10:41:16.235356] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.403 [2024-10-08 10:41:16.235392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:56.403 [2024-10-08 10:41:16.235404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:56.403 [2024-10-08 10:41:16.235418] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.403 [2024-10-08 10:41:16.235426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:56.403 [2024-10-08 10:41:16.235438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:56.403 [2024-10-08 10:41:16.235445] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.403 [2024-10-08 10:41:16.235454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:56.403 [2024-10-08 10:41:16.235462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:56.403 [2024-10-08 10:41:16.235470] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.403 [2024-10-08 10:41:16.235477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:56.403 [2024-10-08 10:41:16.235486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:56.403 [2024-10-08 10:41:16.634245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:09:56.403 [2024-10-08 10:41:16.635452] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.403 [2024-10-08 10:41:16.635481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:56.403 [2024-10-08 10:41:16.635492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:56.403 [2024-10-08 10:41:16.635501] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.403 [2024-10-08 10:41:16.635510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:56.403 [2024-10-08 10:41:16.635518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:56.403 [2024-10-08 10:41:16.635526] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.403 [2024-10-08 10:41:16.635533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:56.403 [2024-10-08 10:41:16.635544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:56.403 [2024-10-08 10:41:16.635550] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:56.403 [2024-10-08 10:41:16.635559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:56.403 [2024-10-08 10:41:16.635566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:56.403 10:41:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.403 10:41:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:56.403 10:41:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:09:56.403 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:56.661 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:56.661 10:41:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:08.857 10:41:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:08.857 10:41:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:08.857 10:41:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:08.857 10:41:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:08.857 10:41:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:08.857 10:41:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.857 10:41:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:08.857 10:41:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:08.857 10:41:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.857 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:08.857 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:08.857 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:08.857 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:08.857 [2024-10-08 10:41:29.034426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:08.857 [2024-10-08 10:41:29.035787] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.857 [2024-10-08 10:41:29.035904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.857 [2024-10-08 10:41:29.035969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:08.857 [2024-10-08 10:41:29.036001] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.857 [2024-10-08 10:41:29.036019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.857 [2024-10-08 10:41:29.036143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:08.857 [2024-10-08 10:41:29.036169] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.857 [2024-10-08 10:41:29.036188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.857 [2024-10-08 10:41:29.036212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:08.857 [2024-10-08 10:41:29.036238] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:08.857 [2024-10-08 10:41:29.036288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:08.857 [2024-10-08 10:41:29.036319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:08.857 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:08.857 10:41:29 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:08.857 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:08.857 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:08.857 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:08.857 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:08.857 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:08.857 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:08.857 10:41:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.857 10:41:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:08.857 10:41:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.857 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:08.857 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:09.115 [2024-10-08 10:41:29.534422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:10:09.115 [2024-10-08 10:41:29.535457] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.115 [2024-10-08 10:41:29.535488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:09.115 [2024-10-08 10:41:29.535500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.115 [2024-10-08 10:41:29.535510] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.115 [2024-10-08 10:41:29.535519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:09.115 [2024-10-08 10:41:29.535528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.115 [2024-10-08 10:41:29.535536] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.115 [2024-10-08 10:41:29.535543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:09.115 [2024-10-08 10:41:29.535552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.115 [2024-10-08 10:41:29.535558] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:09.115 [2024-10-08 10:41:29.535567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:09.115 [2024-10-08 10:41:29.535574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:09.115 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:09.115 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:09.115 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:09.115 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:09.115 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:09.115 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # 
sort -u 00:10:09.115 10:41:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.115 10:41:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:09.115 10:41:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.115 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:09.115 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:09.373 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.373 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.373 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:09.373 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:09.373 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.373 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:09.373 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:09.373 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:09.373 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:09.373 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:09.373 10:41:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:21.608 10:41:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.608 10:41:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:21.608 10:41:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:21.608 [2024-10-08 10:41:41.934642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:21.608 [2024-10-08 10:41:41.935991] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.608 [2024-10-08 10:41:41.936099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.608 [2024-10-08 10:41:41.936164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.608 [2024-10-08 10:41:41.936270] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.608 [2024-10-08 10:41:41.936289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.608 [2024-10-08 10:41:41.936315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.608 [2024-10-08 10:41:41.936339] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.608 [2024-10-08 10:41:41.936437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.608 [2024-10-08 10:41:41.936464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.608 [2024-10-08 10:41:41.936490] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.608 [2024-10-08 10:41:41.936533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.608 [2024-10-08 10:41:41.936562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:21.608 10:41:41 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.608 10:41:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:21.608 10:41:41 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:21.608 10:41:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:21.866 [2024-10-08 10:41:42.334642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:21.866 [2024-10-08 10:41:42.335717] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.866 [2024-10-08 10:41:42.335749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.866 [2024-10-08 10:41:42.335761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.866 [2024-10-08 10:41:42.335773] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.866 [2024-10-08 10:41:42.335783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.866 [2024-10-08 10:41:42.335808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.866 [2024-10-08 10:41:42.335817] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.866 [2024-10-08 10:41:42.335825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.866 [2024-10-08 10:41:42.335834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:21.866 [2024-10-08 10:41:42.335840] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:21.866 [2024-10-08 10:41:42.335849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:21.866 [2024-10-08 10:41:42.335856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:22.124 10:41:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.124 10:41:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:22.124 10:41:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.124 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
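The bdev_bdfs calls repeated through this phase are the bdev-based detach check: query the target's bdevs over JSON-RPC, extract each NVMe bdev's PCI address with jq, and poll ("Still waiting for %s to be gone", sleep 0.5) until the removed BDFs drop out of the list. A sketch assembled from the commands visible in the trace; the jq filter is verbatim, while rpc.py stands in for the script's rpc_cmd wrapper:

    # The bdev_bdfs helper plus the "Still waiting" poll loop from the trace.
    # The jq filter is exactly the one shown; rpc.py stands in for rpc_cmd.
    bdev_bdfs() {
        "$SPDK_DIR/scripts/rpc.py" bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done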
00:10:22.382 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:22.382 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.382 10:41:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.67 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.67 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.67 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.67 2 00:10:34.601 remove_attach_helper took 44.67s to complete (handling 2 nvme drive(s)) 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:34.601 10:41:54 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:34.601 10:41:54 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:34.601 10:41:54 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:41.173 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:41.173 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:41.173 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:41.173 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:41.173 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:41.173 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:41.173 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:41.173 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:41.173 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:41.173 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:41.173 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:41.173 10:42:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.173 10:42:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.173 10:42:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.173 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:41.173 10:42:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:41.173 [2024-10-08 10:42:00.930104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:41.173 [2024-10-08 10:42:00.930910] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.173 [2024-10-08 10:42:00.930937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.173 [2024-10-08 10:42:00.930948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.173 [2024-10-08 10:42:00.930962] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.173 [2024-10-08 10:42:00.930970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.173 [2024-10-08 10:42:00.930979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.173 [2024-10-08 10:42:00.930986] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.173 [2024-10-08 10:42:00.930996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.173 [2024-10-08 10:42:00.931004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.173 [2024-10-08 10:42:00.931012] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.173 [2024-10-08 10:42:00.931019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.173 [2024-10-08 10:42:00.931028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.173 [2024-10-08 10:42:01.330104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:41.173 [2024-10-08 10:42:01.330882] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.173 [2024-10-08 10:42:01.330910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.173 [2024-10-08 10:42:01.330922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.173 [2024-10-08 10:42:01.330932] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.173 [2024-10-08 10:42:01.330941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.173 [2024-10-08 10:42:01.330949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.173 [2024-10-08 10:42:01.330958] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.173 [2024-10-08 10:42:01.330966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.173 [2024-10-08 10:42:01.330974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.173 [2024-10-08 10:42:01.330981] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:41.173 [2024-10-08 10:42:01.330992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:41.173 [2024-10-08 10:42:01.330999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:41.173 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:41.173 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:41.173 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:41.173 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:41.173 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:41.173 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:41.173 10:42:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.173 10:42:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:41.173 10:42:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.173 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:41.173 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:41.173 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:41.173 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:41.174 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:41.174 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:41.174 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:41.174 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:41.174 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:41.174 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
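Note: each hotplug_events iteration above has the same traced shape: detach both controllers (@39-@40), poll bdev_bdfs until no PCI address is left (@50-@51), rebind (@56-@62), then give the target hotplug_wait*2 seconds to reattach (@66) before checking that the expected BDF list is back (@70-@71). Redirection targets are invisible in an xtrace, so every sysfs path below is an assumption for illustration only:

# Sketch of one iteration; the sysfs targets of the traced echoes are ASSUMED.
sleep "$hotplug_wait"                                    # @36: settle before the loop
while (( hotplug_events-- )); do                         # @38
    for dev in "${nvmes[@]}"; do                         # @39
        echo 1 > "/sys/bus/pci/devices/$dev/remove"      # @40: target assumed
    done
    # @43: use_bdev=true selects the bdev-based wait below
    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do     # @50
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # @51
        sleep 0.5                                        # @50
    done
    echo 1 > /sys/bus/pci/rescan                         # @56: target assumed
    for dev in "${nvmes[@]}"; do                         # @58
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59: assumed
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind             # @60: assumed
        echo "$dev" > /sys/bus/pci/drivers_probe                            # @61: assumed
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: assumed
    done
    sleep $((hotplug_wait * 2))                          # @66: sleep 12 with hotplug_wait=6
    bdfs=($(bdev_bdfs))                                  # @70
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]                    # @71: all controllers reattached
done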
0000:00:11.0 00:10:41.174 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:41.174 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:41.174 10:42:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.396 10:42:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.396 10:42:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.396 10:42:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.396 [2024-10-08 10:42:13.730317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:53.396 [2024-10-08 10:42:13.731213] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.396 [2024-10-08 10:42:13.731237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.396 [2024-10-08 10:42:13.731248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.396 [2024-10-08 10:42:13.731261] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.396 [2024-10-08 10:42:13.731269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.396 [2024-10-08 10:42:13.731278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.396 [2024-10-08 10:42:13.731285] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.396 [2024-10-08 10:42:13.731295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.396 [2024-10-08 10:42:13.731302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.396 [2024-10-08 10:42:13.731310] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.396 [2024-10-08 10:42:13.731318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.396 [2024-10-08 10:42:13.731326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.396 10:42:13 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.396 10:42:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.396 10:42:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.396 10:42:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:53.396 10:42:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:53.654 [2024-10-08 10:42:14.130315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:10:53.654 [2024-10-08 10:42:14.131202] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.654 [2024-10-08 10:42:14.131231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.654 [2024-10-08 10:42:14.131243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.654 [2024-10-08 10:42:14.131254] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.654 [2024-10-08 10:42:14.131264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.654 [2024-10-08 10:42:14.131272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.654 [2024-10-08 10:42:14.131281] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.654 [2024-10-08 10:42:14.131289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.654 [2024-10-08 10:42:14.131298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.654 [2024-10-08 10:42:14.131305] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.654 [2024-10-08 10:42:14.131314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.654 [2024-10-08 10:42:14.131322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.911 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:53.911 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.911 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.911 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.911 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.912 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:10:53.912 10:42:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.912 10:42:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.912 10:42:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.912 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:53.912 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:53.912 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:53.912 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:53.912 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:53.912 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:53.912 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:53.912 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:53.912 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:53.912 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:54.170 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:54.170 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.170 10:42:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:06.376 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:06.376 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:06.376 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:06.376 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.376 10:42:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.376 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.376 10:42:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.376 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.376 10:42:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.376 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:06.376 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:06.376 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.376 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.376 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.376 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.376 [2024-10-08 10:42:26.630508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:06.376 [2024-10-08 10:42:26.631459] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.376 [2024-10-08 10:42:26.631555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.376 [2024-10-08 10:42:26.631616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.376 [2024-10-08 10:42:26.631706] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.376 [2024-10-08 10:42:26.631726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.376 [2024-10-08 10:42:26.631752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.377 [2024-10-08 10:42:26.631777] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.377 [2024-10-08 10:42:26.631842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.377 [2024-10-08 10:42:26.631871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.377 [2024-10-08 10:42:26.631897] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.377 [2024-10-08 10:42:26.631913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.377 [2024-10-08 10:42:26.631978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.377 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:06.377 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.377 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.377 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.377 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.377 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.377 10:42:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.377 10:42:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.377 10:42:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.377 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:06.377 10:42:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:06.635 [2024-10-08 10:42:27.030506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:06.635 [2024-10-08 10:42:27.031309] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.635 [2024-10-08 10:42:27.031338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.635 [2024-10-08 10:42:27.031351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.635 [2024-10-08 10:42:27.031361] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.635 [2024-10-08 10:42:27.031371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.635 [2024-10-08 10:42:27.031379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.635 [2024-10-08 10:42:27.031390] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.635 [2024-10-08 10:42:27.031397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.635 [2024-10-08 10:42:27.031406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.635 [2024-10-08 10:42:27.031413] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.635 [2024-10-08 10:42:27.031421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.635 [2024-10-08 10:42:27.031428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.635 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:06.635 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.635 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.635 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.635 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.635 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.635 10:42:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.635 10:42:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.635 10:42:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.635 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:06.635 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:06.893 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.893 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.893 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:06.894 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:06.894 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.894 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.894 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.894 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:06.894 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:06.894 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.894 10:42:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:19.096 10:42:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:19.096 10:42:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:19.096 10:42:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:19.096 10:42:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.096 10:42:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.096 10:42:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.096 10:42:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:19.096 10:42:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.64 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.64 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:19.096 10:42:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:11:19.096 10:42:39 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:11:19.096 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 10:42:39 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:19.096 10:42:39 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80364 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 80364 ']' 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 80364 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80364 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80364' 00:11:19.096 killing process with pid 80364 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@969 -- # kill 80364 00:11:19.096 10:42:39 sw_hotplug -- common/autotest_common.sh@974 -- # wait 80364 00:11:19.356 10:42:39 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:19.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:20.190 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:20.190 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:20.190 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:20.190 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:20.190 00:11:20.190 real 2m27.986s 00:11:20.190 user 1m47.171s 00:11:20.190 sys 0m19.297s 00:11:20.190 10:42:40 sw_hotplug -- 
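Note: teardown goes through killprocess 80364 (autotest_common.sh@950-@974 in the trace above): validate the pid, probe that the process is still alive, make sure it is not a sudo wrapper, then kill it and wait for it to be reaped. A sketch matching the traced checks; the sudo branch is not taken in this run, so its body is not reconstructed:

# Sketch matching the traced killprocess statements.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                          # @950: no pid given
    kill -0 "$pid" 2> /dev/null || return 0            # @954: assumed: already-dead pid is fine
    if [ "$(uname)" = Linux ]; then                    # @955
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # @956: reactor_0 here
        if [ "$process_name" = sudo ]; then            # @960
            : # sudo wrapper handling; branch not taken in this run, body not visible
        fi
    fi
    echo "killing process with pid $pid"               # @968
    kill "$pid"                                        # @969
    wait "$pid"                                        # @974: reap and propagate the status
}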
common/autotest_common.sh@1126 -- # xtrace_disable 00:11:20.190 ************************************ 00:11:20.190 10:42:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.190 END TEST sw_hotplug 00:11:20.190 ************************************ 00:11:20.190 10:42:40 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:20.190 10:42:40 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:20.190 10:42:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:20.190 10:42:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:20.190 10:42:40 -- common/autotest_common.sh@10 -- # set +x 00:11:20.190 ************************************ 00:11:20.190 START TEST nvme_xnvme 00:11:20.190 ************************************ 00:11:20.190 10:42:40 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:20.452 * Looking for test storage... 00:11:20.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:20.452 10:42:40 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:20.452 10:42:40 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:11:20.452 10:42:40 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:20.452 10:42:40 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:20.452 10:42:40 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.452 10:42:40 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:20.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.452 --rc genhtml_branch_coverage=1 00:11:20.452 --rc genhtml_function_coverage=1 00:11:20.452 --rc genhtml_legend=1 00:11:20.452 --rc geninfo_all_blocks=1 00:11:20.452 --rc geninfo_unexecuted_blocks=1 00:11:20.452 00:11:20.452 ' 00:11:20.452 10:42:40 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:20.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.452 --rc genhtml_branch_coverage=1 00:11:20.452 --rc genhtml_function_coverage=1 00:11:20.452 --rc genhtml_legend=1 00:11:20.452 --rc geninfo_all_blocks=1 00:11:20.452 --rc geninfo_unexecuted_blocks=1 00:11:20.452 00:11:20.452 ' 00:11:20.452 10:42:40 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:20.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.452 --rc genhtml_branch_coverage=1 00:11:20.452 --rc genhtml_function_coverage=1 00:11:20.452 --rc genhtml_legend=1 00:11:20.452 --rc geninfo_all_blocks=1 00:11:20.452 --rc geninfo_unexecuted_blocks=1 00:11:20.452 00:11:20.452 ' 00:11:20.452 10:42:40 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:20.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.452 --rc genhtml_branch_coverage=1 00:11:20.452 --rc genhtml_function_coverage=1 00:11:20.452 --rc genhtml_legend=1 00:11:20.452 --rc geninfo_all_blocks=1 00:11:20.452 --rc geninfo_unexecuted_blocks=1 00:11:20.452 00:11:20.452 ' 00:11:20.452 10:42:40 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.452 10:42:40 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.452 10:42:40 nvme_xnvme -- paths/export.sh@2 -- # 
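Note: the lcov version gate above runs lt 1.15 2, which is cmp_versions 1.15 '<' 2 from scripts/common.sh: both version strings are split on . - : into arrays (@336-@337), each component is normalized through decimal (@353-@355), and the arrays are compared component by component (@364-@368); here 1 < 2 decides it on the first component and the test succeeds. A sketch reduced to the '<' operator actually exercised in this run (the case block at @344 also dispatches other operators):

# Sketch of the traced '<' path only; the real cmp_versions handles more operators.
decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0    # @353-@355: non-numeric fallback assumed
}

version_lt() {                                      # cmp_versions "$1" '<' "$2", reduced
    local ver1 ver2 v
    IFS=.-: read -ra ver1 <<< "$1"                  # @336
    IFS=.-: read -ra ver2 <<< "$2"                  # @337
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        ver1[v]=$(decimal "${ver1[v]:-0}")          # @365
        ver2[v]=$(decimal "${ver2[v]:-0}")          # @366
        (( ver1[v] > ver2[v] )) && return 1         # @367
        (( ver1[v] < ver2[v] )) && return 0         # @368: 1 < 2 ends the comparison here
    done
    return 1                                        # equal versions are not strictly less
}

version_lt 1.15 2 succeeding is what steers the suite into the lcov_branch_coverage/lcov_function_coverage LCOV_OPTS seen right after.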
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.452 10:42:40 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.452 10:42:40 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.452 10:42:40 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:20.452 10:42:40 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.452 10:42:40 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:20.452 10:42:40 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:20.452 10:42:40 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:20.452 10:42:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:20.452 ************************************ 00:11:20.452 START TEST xnvme_to_malloc_dd_copy 00:11:20.452 ************************************ 00:11:20.452 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:11:20.452 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:20.452 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:20.452 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:20.452 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:11:20.452 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:20.452 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:20.452 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:20.452 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:11:20.452 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:20.452 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:20.452 10:42:40 
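Note: malloc_to_xnvme_copy first loads null_blk with a 1 GiB device (init_null_blk gb=1, dd/common.sh@186-@187 above), which is what later appears as /dev/nullb0 in the bdev config; the suite unloads it at the end with modprobe -r (dd/common.sh@191, visible further down). Both traced @186 statements plausibly sit on one source line joined by ||, but that structure is a guess:

# Sketch of the dd/common.sh null_blk helpers; the @186 control flow is assumed.
init_null_blk() {
    [[ -e /sys/module/null_blk ]] || modprobe null_blk "$@"   # @186: load only if absent (assumed)
    return 0                                                   # @187
}

remove_null_blk() {
    modprobe -r null_blk                                       # dd/common.sh@191
}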
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:20.453 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:20.453 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:20.453 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:20.453 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:20.453 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:20.453 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:20.453 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:20.453 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:20.453 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:20.453 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:20.453 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:20.453 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:20.453 10:42:40 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:20.453 { 00:11:20.453 "subsystems": [ 00:11:20.453 { 00:11:20.453 "subsystem": "bdev", 00:11:20.453 "config": [ 00:11:20.453 { 00:11:20.453 "params": { 00:11:20.453 "block_size": 512, 00:11:20.453 "num_blocks": 2097152, 00:11:20.453 "name": "malloc0" 00:11:20.453 }, 00:11:20.453 "method": "bdev_malloc_create" 00:11:20.453 }, 00:11:20.453 { 00:11:20.453 "params": { 00:11:20.453 "io_mechanism": "libaio", 00:11:20.453 "filename": "/dev/nullb0", 00:11:20.453 "name": "null0" 00:11:20.453 }, 00:11:20.453 "method": "bdev_xnvme_create" 00:11:20.453 }, 00:11:20.453 { 00:11:20.453 "method": "bdev_wait_for_examine" 00:11:20.453 } 00:11:20.453 ] 00:11:20.453 } 00:11:20.453 ] 00:11:20.453 } 00:11:20.453 [2024-10-08 10:42:41.026230] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:20.453 [2024-10-08 10:42:41.026417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81717 ] 00:11:20.714 [2024-10-08 10:42:41.158936] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
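Note: the JSON dumped above is the gen_conf output that spdk_dd reads from /dev/fd/62, so the whole copy test boils down to feeding that config to spdk_dd and copying the malloc bdev into the xnvme-wrapped null device. A standalone reproduction with the same config written to a file instead of a pipe (paths as in this CI environment):

# Standalone sketch of the traced run; the JSON mirrors the config dumped above.
modprobe null_blk gb=1    # 1 GiB /dev/nullb0, as init_null_blk does

cat > /tmp/xnvme_copy.json << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create"
        },
        {
          "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json

The suite then reverses direction with --ib=null0 --ob=malloc0 (xnvme.sh@47 below) and repeats both directions with io_mechanism switched to io_uring.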
00:11:20.714 [2024-10-08 10:42:41.180876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.714 [2024-10-08 10:42:41.230204] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.097  [2024-10-08T10:42:43.616Z] Copying: 220/1024 [MB] (220 MBps) [2024-10-08T10:42:44.550Z] Copying: 442/1024 [MB] (221 MBps) [2024-10-08T10:42:45.926Z] Copying: 720/1024 [MB] (278 MBps) [2024-10-08T10:42:45.926Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:11:25.349 00:11:25.349 10:42:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:25.349 10:42:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:25.349 10:42:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:25.349 10:42:45 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:25.349 { 00:11:25.349 "subsystems": [ 00:11:25.349 { 00:11:25.349 "subsystem": "bdev", 00:11:25.349 "config": [ 00:11:25.349 { 00:11:25.349 "params": { 00:11:25.349 "block_size": 512, 00:11:25.349 "num_blocks": 2097152, 00:11:25.349 "name": "malloc0" 00:11:25.349 }, 00:11:25.349 "method": "bdev_malloc_create" 00:11:25.349 }, 00:11:25.349 { 00:11:25.349 "params": { 00:11:25.349 "io_mechanism": "libaio", 00:11:25.349 "filename": "/dev/nullb0", 00:11:25.349 "name": "null0" 00:11:25.349 }, 00:11:25.349 "method": "bdev_xnvme_create" 00:11:25.349 }, 00:11:25.349 { 00:11:25.349 "method": "bdev_wait_for_examine" 00:11:25.349 } 00:11:25.349 ] 00:11:25.349 } 00:11:25.349 ] 00:11:25.349 } 00:11:25.349 [2024-10-08 10:42:45.915564] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:25.349 [2024-10-08 10:42:45.916030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81777 ] 00:11:25.608 [2024-10-08 10:42:46.044943] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:25.608 [2024-10-08 10:42:46.058751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.608 [2024-10-08 10:42:46.104149] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.981  [2024-10-08T10:42:48.492Z] Copying: 308/1024 [MB] (308 MBps) [2024-10-08T10:42:49.424Z] Copying: 618/1024 [MB] (309 MBps) [2024-10-08T10:42:49.682Z] Copying: 927/1024 [MB] (309 MBps) [2024-10-08T10:42:50.250Z] Copying: 1024/1024 [MB] (average 309 MBps) 00:11:29.673 00:11:29.673 10:42:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:29.673 10:42:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:29.673 10:42:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:29.673 10:42:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:29.673 10:42:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:29.673 10:42:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:29.673 { 00:11:29.673 "subsystems": [ 00:11:29.673 { 00:11:29.673 "subsystem": "bdev", 00:11:29.673 "config": [ 00:11:29.673 { 00:11:29.673 "params": { 00:11:29.673 "block_size": 512, 00:11:29.673 "num_blocks": 2097152, 00:11:29.673 "name": "malloc0" 00:11:29.673 }, 00:11:29.673 "method": "bdev_malloc_create" 00:11:29.673 }, 00:11:29.673 { 00:11:29.673 "params": { 00:11:29.673 "io_mechanism": "io_uring", 00:11:29.673 "filename": "/dev/nullb0", 00:11:29.673 "name": "null0" 00:11:29.673 }, 00:11:29.673 "method": "bdev_xnvme_create" 00:11:29.673 }, 00:11:29.673 { 00:11:29.673 "method": "bdev_wait_for_examine" 00:11:29.673 } 00:11:29.673 ] 00:11:29.673 } 00:11:29.673 ] 00:11:29.673 } 00:11:29.673 [2024-10-08 10:42:50.078879] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:29.673 [2024-10-08 10:42:50.078994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81832 ] 00:11:29.673 [2024-10-08 10:42:50.207228] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:29.673 [2024-10-08 10:42:50.228455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.933 [2024-10-08 10:42:50.264303] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.392  [2024-10-08T10:42:52.912Z] Copying: 230/1024 [MB] (230 MBps) [2024-10-08T10:42:53.848Z] Copying: 462/1024 [MB] (231 MBps) [2024-10-08T10:42:54.804Z] Copying: 730/1024 [MB] (268 MBps) [2024-10-08T10:42:54.805Z] Copying: 1024/1024 [MB] (average 260 MBps) 00:11:34.228 00:11:34.228 10:42:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:34.487 10:42:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:34.487 10:42:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:34.487 10:42:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:34.487 { 00:11:34.487 "subsystems": [ 00:11:34.487 { 00:11:34.487 "subsystem": "bdev", 00:11:34.487 "config": [ 00:11:34.487 { 00:11:34.487 "params": { 00:11:34.487 "block_size": 512, 00:11:34.487 "num_blocks": 2097152, 00:11:34.487 "name": "malloc0" 00:11:34.487 }, 00:11:34.487 "method": "bdev_malloc_create" 00:11:34.487 }, 00:11:34.487 { 00:11:34.487 "params": { 00:11:34.487 "io_mechanism": "io_uring", 00:11:34.487 "filename": "/dev/nullb0", 00:11:34.487 "name": "null0" 00:11:34.487 }, 00:11:34.487 "method": "bdev_xnvme_create" 00:11:34.487 }, 00:11:34.487 { 00:11:34.487 "method": "bdev_wait_for_examine" 00:11:34.487 } 00:11:34.487 ] 00:11:34.487 } 00:11:34.487 ] 00:11:34.487 } 00:11:34.487 [2024-10-08 10:42:54.864445] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:34.487 [2024-10-08 10:42:54.864559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81892 ] 00:11:34.487 [2024-10-08 10:42:54.993602] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:34.487 [2024-10-08 10:42:55.014326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.487 [2024-10-08 10:42:55.048127] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.862  [2024-10-08T10:42:57.374Z] Copying: 324/1024 [MB] (324 MBps) [2024-10-08T10:42:58.308Z] Copying: 649/1024 [MB] (324 MBps) [2024-10-08T10:42:58.567Z] Copying: 975/1024 [MB] (325 MBps) [2024-10-08T10:42:58.827Z] Copying: 1024/1024 [MB] (average 324 MBps) 00:11:38.250 00:11:38.250 10:42:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:11:38.250 10:42:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:11:38.250 ************************************ 00:11:38.250 END TEST xnvme_to_malloc_dd_copy 00:11:38.250 ************************************ 00:11:38.250 00:11:38.250 real 0m17.850s 00:11:38.250 user 0m14.697s 00:11:38.250 sys 0m2.642s 00:11:38.250 10:42:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.250 10:42:58 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:11:38.250 10:42:58 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:11:38.250 10:42:58 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:38.250 10:42:58 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.250 10:42:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:38.509 ************************************ 00:11:38.509 START TEST xnvme_bdevperf 00:11:38.509 ************************************ 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:38.509 
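Note: xnvme_bdevperf points the bdevperf example app at the same xnvme-wrapped null device; the flags traced above mean queue depth 64 (-q), random reads (-w randread), a 5 second run (-t), 4 KiB I/Os (-o) against the bdev named null0 (-T). The same invocation with the generated config saved to a file instead of /dev/fd/62:

# Sketch of the traced bdevperf run; /tmp/bdevperf_null0.json is assumed to hold
# the bdev_xnvme_create config for null0 dumped in the log below the command.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_null0.json \
    -q 64 -w randread -t 5 -T null0 -o 4096

Only io_mechanism changes between the two runs below: libaio lands around 206K IOPS (804.75 MiB/s) and io_uring around 237K IOPS (927.75 MiB/s).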
10:42:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:38.509 10:42:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:38.509 { 00:11:38.509 "subsystems": [ 00:11:38.509 { 00:11:38.509 "subsystem": "bdev", 00:11:38.509 "config": [ 00:11:38.509 { 00:11:38.509 "params": { 00:11:38.509 "io_mechanism": "libaio", 00:11:38.509 "filename": "/dev/nullb0", 00:11:38.509 "name": "null0" 00:11:38.509 }, 00:11:38.509 "method": "bdev_xnvme_create" 00:11:38.509 }, 00:11:38.509 { 00:11:38.509 "method": "bdev_wait_for_examine" 00:11:38.509 } 00:11:38.509 ] 00:11:38.509 } 00:11:38.509 ] 00:11:38.509 } 00:11:38.509 [2024-10-08 10:42:58.914717] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:38.509 [2024-10-08 10:42:58.914839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81969 ] 00:11:38.509 [2024-10-08 10:42:59.044230] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:38.509 [2024-10-08 10:42:59.063563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.768 [2024-10-08 10:42:59.105278] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.768 Running I/O for 5 seconds... 00:11:40.636 205120.00 IOPS, 801.25 MiB/s [2024-10-08T10:43:02.588Z] 205600.00 IOPS, 803.12 MiB/s [2024-10-08T10:43:03.524Z] 206144.00 IOPS, 805.25 MiB/s [2024-10-08T10:43:04.460Z] 206304.00 IOPS, 805.88 MiB/s [2024-10-08T10:43:04.460Z] 206016.00 IOPS, 804.75 MiB/s 00:11:43.883 Latency(us) 00:11:43.883 [2024-10-08T10:43:04.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:43.883 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:43.883 null0 : 5.00 205952.93 804.50 0.00 0.00 308.58 300.90 1531.27 00:11:43.883 [2024-10-08T10:43:04.460Z] =================================================================================================================== 00:11:43.883 [2024-10-08T10:43:04.460Z] Total : 205952.93 804.50 0.00 0.00 308.58 300.90 1531.27 00:11:43.883 10:43:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:11:43.883 10:43:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:11:43.883 10:43:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:11:43.883 10:43:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:11:43.883 10:43:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:11:43.883 10:43:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:43.883 { 00:11:43.883 "subsystems": [ 00:11:43.883 { 00:11:43.883 "subsystem": "bdev", 00:11:43.883 "config": [ 00:11:43.883 { 00:11:43.883 "params": { 00:11:43.883 "io_mechanism": "io_uring", 00:11:43.883 "filename": "/dev/nullb0", 00:11:43.883 "name": "null0" 00:11:43.883 }, 00:11:43.883 "method": "bdev_xnvme_create" 00:11:43.883 }, 00:11:43.883 { 00:11:43.883 "method": "bdev_wait_for_examine" 00:11:43.883 } 00:11:43.883 ] 00:11:43.883 } 00:11:43.883 
] 00:11:43.883 } 00:11:43.883 [2024-10-08 10:43:04.412870] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:43.883 [2024-10-08 10:43:04.412983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82033 ] 00:11:44.143 [2024-10-08 10:43:04.541555] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:44.143 [2024-10-08 10:43:04.560224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.143 [2024-10-08 10:43:04.598572] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.143 Running I/O for 5 seconds... 00:11:46.452 237696.00 IOPS, 928.50 MiB/s [2024-10-08T10:43:07.979Z] 237664.00 IOPS, 928.38 MiB/s [2024-10-08T10:43:08.929Z] 237546.67 IOPS, 927.92 MiB/s [2024-10-08T10:43:09.871Z] 237520.00 IOPS, 927.81 MiB/s [2024-10-08T10:43:09.871Z] 237504.00 IOPS, 927.75 MiB/s 00:11:49.294 Latency(us) 00:11:49.294 [2024-10-08T10:43:09.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:49.294 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:11:49.294 null0 : 5.00 237431.94 927.47 0.00 0.00 267.52 252.06 1487.16 00:11:49.294 [2024-10-08T10:43:09.871Z] =================================================================================================================== 00:11:49.294 [2024-10-08T10:43:09.871Z] Total : 237431.94 927.47 0.00 0.00 267.52 252.06 1487.16 00:11:49.294 10:43:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:11:49.294 10:43:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:11:49.294 ************************************ 00:11:49.294 END TEST xnvme_bdevperf 00:11:49.294 ************************************ 00:11:49.294 00:11:49.294 real 0m11.019s 00:11:49.294 user 0m8.535s 00:11:49.294 sys 0m2.258s 00:11:49.294 10:43:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.294 10:43:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:49.556 ************************************ 00:11:49.556 END TEST nvme_xnvme 00:11:49.556 ************************************ 00:11:49.556 00:11:49.556 real 0m29.138s 00:11:49.556 user 0m23.339s 00:11:49.556 sys 0m5.032s 00:11:49.556 10:43:09 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.556 10:43:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:49.556 10:43:09 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:49.556 10:43:09 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:49.556 10:43:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.556 10:43:09 -- common/autotest_common.sh@10 -- # set +x 00:11:49.556 ************************************ 00:11:49.556 START TEST blockdev_xnvme 00:11:49.556 ************************************ 00:11:49.556 10:43:09 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:11:49.556 * Looking for test storage... 
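Note: every suite in this log is launched through run_test, visible in the trace as the @1101 argument check, the @1107 xtrace_disable, the START TEST/END TEST banners, and the @1125 execution of the test command itself. A sketch pieced together from those traces; the banner width and the failure path are approximations:

# Sketch of run_test from the traced guards and banners.
run_test() {
    [ "$#" -le 1 ] && return 1            # @1101: need a name plus a command (failure path assumed)
    local test_name=$1
    shift
    xtrace_disable                        # @1107: quiet tracing around the banner
    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'
    "$@"                                  # @1125: e.g. test/bdev/blockdev.sh xnvme
    local rc=$?
    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
    return "$rc"
}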
00:11:49.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:49.556 10:43:10 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:49.556 10:43:10 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:11:49.556 10:43:10 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:49.556 10:43:10 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.556 10:43:10 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:11:49.556 10:43:10 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.556 10:43:10 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:49.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.556 --rc genhtml_branch_coverage=1 00:11:49.556 --rc genhtml_function_coverage=1 00:11:49.556 --rc genhtml_legend=1 00:11:49.556 --rc geninfo_all_blocks=1 00:11:49.556 --rc geninfo_unexecuted_blocks=1 00:11:49.556 00:11:49.556 ' 00:11:49.556 10:43:10 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:49.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.556 --rc genhtml_branch_coverage=1 00:11:49.556 --rc genhtml_function_coverage=1 00:11:49.556 --rc genhtml_legend=1 
00:11:49.556 --rc geninfo_all_blocks=1 00:11:49.556 --rc geninfo_unexecuted_blocks=1 00:11:49.556 00:11:49.556 ' 00:11:49.556 10:43:10 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:49.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.556 --rc genhtml_branch_coverage=1 00:11:49.556 --rc genhtml_function_coverage=1 00:11:49.556 --rc genhtml_legend=1 00:11:49.556 --rc geninfo_all_blocks=1 00:11:49.556 --rc geninfo_unexecuted_blocks=1 00:11:49.556 00:11:49.556 ' 00:11:49.556 10:43:10 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:49.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.556 --rc genhtml_branch_coverage=1 00:11:49.556 --rc genhtml_function_coverage=1 00:11:49.556 --rc genhtml_legend=1 00:11:49.556 --rc geninfo_all_blocks=1 00:11:49.556 --rc geninfo_unexecuted_blocks=1 00:11:49.556 00:11:49.556 ' 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=82175 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:49.556 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 82175 00:11:49.556 10:43:10 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 82175 ']' 00:11:49.556 10:43:10 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:11:49.557 10:43:10 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:49.557 10:43:10 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:49.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.557 10:43:10 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.557 10:43:10 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:49.557 10:43:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:49.818 [2024-10-08 10:43:10.185493] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:49.818 [2024-10-08 10:43:10.185891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82175 ] 00:11:49.818 [2024-10-08 10:43:10.318674] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:49.818 [2024-10-08 10:43:10.337857] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.818 [2024-10-08 10:43:10.389662] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.762 10:43:11 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:50.762 10:43:11 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:11:50.762 10:43:11 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:11:50.762 10:43:11 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:11:50.762 10:43:11 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:11:50.762 10:43:11 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:11:50.762 10:43:11 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:50.762 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:51.023 Waiting for block devices as requested 00:11:51.023 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:51.282 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:51.282 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:51.282 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:56.549 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
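(The zoned probe above continues for the remaining namespaces in the trace below.) For reference, get_zoned_devs reduces to a per-device sysfs check; a minimal standalone sketch of the same pattern, assuming the nvme* naming seen in this run:

    # Sketch: flag zoned block devices the way the trace does, by reading
    # /sys/block/<dev>/queue/zoned; "none" means a conventional device.
    for dev in /sys/block/nvme*n*; do
        name=${dev##*/}
        if [[ -e $dev/queue/zoned && $(<"$dev/queue/zoned") != none ]]; then
            echo "$name is zoned"
        fi
    done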
00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 
00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:11:56.549 nvme0n1 00:11:56.549 nvme1n1 00:11:56.549 nvme2n1 00:11:56.549 nvme2n2 00:11:56.549 nvme2n3 00:11:56.549 nvme3n1 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.549 10:43:16 blockdev_xnvme -- 
bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.549 10:43:16 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.549 10:43:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.549 10:43:17 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:11:56.549 10:43:17 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:11:56.549 10:43:17 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:11:56.549 10:43:17 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.549 10:43:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:56.549 10:43:17 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.549 10:43:17 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:11:56.550 10:43:17 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:11:56.550 10:43:17 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "814b3132-c0f3-485d-bcc9-b90a66cafd8a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "814b3132-c0f3-485d-bcc9-b90a66cafd8a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "61545dff-6914-40cd-8be4-ba79ab2a8ed5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "61545dff-6914-40cd-8be4-ba79ab2a8ed5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4a0d9896-8290-4997-9af4-07e79d213a68"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4a0d9896-8290-4997-9af4-07e79d213a68",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "9cc7dee8-05df-4d09-97f3-df9c0d608ebc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9cc7dee8-05df-4d09-97f3-df9c0d608ebc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "ad310326-632f-47f6-8d1e-cc1836a45003"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ad310326-632f-47f6-8d1e-cc1836a45003",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e5edace7-8d1b-4357-b7a5-0ca9f0798423"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e5edace7-8d1b-4357-b7a5-0ca9f0798423",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:11:56.550 10:43:17 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:11:56.550 10:43:17 blockdev_xnvme -- bdev/blockdev.sh@751 -- # 
hello_world_bdev=nvme0n1 00:11:56.550 10:43:17 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:11:56.550 10:43:17 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 82175 00:11:56.550 10:43:17 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 82175 ']' 00:11:56.550 10:43:17 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 82175 00:11:56.550 10:43:17 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:11:56.550 10:43:17 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:56.550 10:43:17 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82175 00:11:56.550 killing process with pid 82175 00:11:56.550 10:43:17 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:56.550 10:43:17 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:56.550 10:43:17 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82175' 00:11:56.550 10:43:17 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 82175 00:11:56.550 10:43:17 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 82175 00:11:57.124 10:43:17 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:57.124 10:43:17 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:57.124 10:43:17 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:57.124 10:43:17 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.124 10:43:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:57.124 ************************************ 00:11:57.124 START TEST bdev_hello_world 00:11:57.124 ************************************ 00:11:57.124 10:43:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:11:57.124 [2024-10-08 10:43:17.568668] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:57.124 [2024-10-08 10:43:17.568853] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82522 ] 00:11:57.385 [2024-10-08 10:43:17.700088] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
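For reference, the hello_world run traced here is reproducible standalone; a sketch using the same binary, config file, and bdev name this log already invokes:

    # Sketch: run the SPDK hello_bdev example against the xnvme bdev.json
    # generated above (same arguments as the run_test call in this trace).
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1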
00:11:57.385 [2024-10-08 10:43:17.718504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.385 [2024-10-08 10:43:17.769142] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.645 [2024-10-08 10:43:17.970147] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:57.645 [2024-10-08 10:43:17.970216] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:11:57.645 [2024-10-08 10:43:17.970242] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:57.645 [2024-10-08 10:43:17.972536] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:57.645 [2024-10-08 10:43:17.973007] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:57.645 [2024-10-08 10:43:17.973045] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:57.645 [2024-10-08 10:43:17.974066] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:57.645 00:11:57.645 [2024-10-08 10:43:17.974132] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:57.645 00:11:57.645 ************************************ 00:11:57.645 END TEST bdev_hello_world 00:11:57.645 real 0m0.677s 00:11:57.645 user 0m0.347s 00:11:57.645 sys 0m0.205s 00:11:57.645 10:43:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.645 10:43:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:57.645 ************************************ 00:11:57.912 10:43:18 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:11:57.912 10:43:18 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:57.912 10:43:18 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.912 10:43:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:57.912 ************************************ 00:11:57.912 START TEST bdev_bounds 00:11:57.912 ************************************ 00:11:57.912 10:43:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:11:57.912 Process bdevio pid: 82548 00:11:57.912 10:43:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=82548 00:11:57.912 10:43:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:57.912 10:43:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 82548' 00:11:57.912 10:43:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 82548 00:11:57.912 10:43:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 82548 ']' 00:11:57.912 10:43:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.912 10:43:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.912 10:43:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
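The bounds test that follows drives bdevio over the same config; a sketch of the equivalent standalone invocation, with flags copied from the trace below (backgrounding the server is an assumption of this sketch, not something the harness does literally):

    # Sketch: bdevio exercises boundary reads/writes on every bdev in the
    # config; -w makes it wait until tests.py perform_tests starts the run.
    cd /home/vagrant/spdk_repo/spdk
    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    ./test/bdev/bdevio/tests.py perform_tests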
00:11:57.912 10:43:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:57.912 10:43:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.912 10:43:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:57.912 [2024-10-08 10:43:18.310975] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:57.912 [2024-10-08 10:43:18.311120] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82548 ] 00:11:57.912 [2024-10-08 10:43:18.443716] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:57.912 [2024-10-08 10:43:18.464100] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:58.174 [2024-10-08 10:43:18.518188] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.174 [2024-10-08 10:43:18.518592] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.174 [2024-10-08 10:43:18.518631] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.746 10:43:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.746 10:43:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:11:58.746 10:43:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:58.746 I/O targets: 00:11:58.746 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:11:58.746 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:11:58.746 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:58.746 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:58.746 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:11:58.746 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:11:58.746 00:11:58.746 00:11:58.746 CUnit - A unit testing framework for C - Version 2.1-3 00:11:58.746 http://cunit.sourceforge.net/ 00:11:58.746 00:11:58.746 00:11:58.746 Suite: bdevio tests on: nvme3n1 00:11:58.746 Test: blockdev write read block ...passed 00:11:58.746 Test: blockdev write zeroes read block ...passed 00:11:58.746 Test: blockdev write zeroes read no split ...passed 00:11:58.746 Test: blockdev write zeroes read split ...passed 00:11:58.746 Test: blockdev write zeroes read split partial ...passed 00:11:58.746 Test: blockdev reset ...passed 00:11:58.746 Test: blockdev write read 8 blocks ...passed 00:11:58.746 Test: blockdev write read size > 128k ...passed 00:11:58.746 Test: blockdev write read invalid size ...passed 00:11:58.746 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:58.746 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:58.746 Test: blockdev write read max offset ...passed 00:11:58.747 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:58.747 Test: blockdev writev readv 8 blocks ...passed 00:11:58.747 Test: blockdev writev readv 30 x 1block ...passed 00:11:58.747 Test: blockdev writev readv block ...passed 00:11:58.747 Test: blockdev writev readv size > 128k ...passed 00:11:58.747 Test: blockdev writev readv size > 128k in two iovs 
...passed 00:11:59.006 Test: blockdev comparev and writev ...passed 00:11:59.006 Test: blockdev nvme passthru rw ...passed 00:11:59.006 Test: blockdev nvme passthru vendor specific ...passed 00:11:59.006 Test: blockdev nvme admin passthru ...passed 00:11:59.006 Test: blockdev copy ...passed 00:11:59.006 Suite: bdevio tests on: nvme2n3 00:11:59.006 Test: blockdev write read block ...passed 00:11:59.006 Test: blockdev write zeroes read block ...passed 00:11:59.006 Test: blockdev write zeroes read no split ...passed 00:11:59.007 Test: blockdev write zeroes read split ...passed 00:11:59.007 Test: blockdev write zeroes read split partial ...passed 00:11:59.007 Test: blockdev reset ...passed 00:11:59.007 Test: blockdev write read 8 blocks ...passed 00:11:59.007 Test: blockdev write read size > 128k ...passed 00:11:59.007 Test: blockdev write read invalid size ...passed 00:11:59.007 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:59.007 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:59.007 Test: blockdev write read max offset ...passed 00:11:59.007 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:59.007 Test: blockdev writev readv 8 blocks ...passed 00:11:59.007 Test: blockdev writev readv 30 x 1block ...passed 00:11:59.007 Test: blockdev writev readv block ...passed 00:11:59.007 Test: blockdev writev readv size > 128k ...passed 00:11:59.007 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:59.007 Test: blockdev comparev and writev ...passed 00:11:59.007 Test: blockdev nvme passthru rw ...passed 00:11:59.007 Test: blockdev nvme passthru vendor specific ...passed 00:11:59.007 Test: blockdev nvme admin passthru ...passed 00:11:59.007 Test: blockdev copy ...passed 00:11:59.007 Suite: bdevio tests on: nvme2n2 00:11:59.007 Test: blockdev write read block ...passed 00:11:59.007 Test: blockdev write zeroes read block ...passed 00:11:59.007 Test: blockdev write zeroes read no split ...passed 00:11:59.007 Test: blockdev write zeroes read split ...passed 00:11:59.007 Test: blockdev write zeroes read split partial ...passed 00:11:59.007 Test: blockdev reset ...passed 00:11:59.007 Test: blockdev write read 8 blocks ...passed 00:11:59.007 Test: blockdev write read size > 128k ...passed 00:11:59.007 Test: blockdev write read invalid size ...passed 00:11:59.007 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:59.007 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:59.007 Test: blockdev write read max offset ...passed 00:11:59.007 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:59.007 Test: blockdev writev readv 8 blocks ...passed 00:11:59.007 Test: blockdev writev readv 30 x 1block ...passed 00:11:59.007 Test: blockdev writev readv block ...passed 00:11:59.007 Test: blockdev writev readv size > 128k ...passed 00:11:59.007 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:59.007 Test: blockdev comparev and writev ...passed 00:11:59.007 Test: blockdev nvme passthru rw ...passed 00:11:59.007 Test: blockdev nvme passthru vendor specific ...passed 00:11:59.007 Test: blockdev nvme admin passthru ...passed 00:11:59.007 Test: blockdev copy ...passed 00:11:59.007 Suite: bdevio tests on: nvme2n1 00:11:59.007 Test: blockdev write read block ...passed 00:11:59.007 Test: blockdev write zeroes read block ...passed 00:11:59.007 Test: blockdev write zeroes read no split ...passed 00:11:59.007 Test: blockdev write 
zeroes read split ...passed 00:11:59.007 Test: blockdev write zeroes read split partial ...passed 00:11:59.007 Test: blockdev reset ...passed 00:11:59.007 Test: blockdev write read 8 blocks ...passed 00:11:59.007 Test: blockdev write read size > 128k ...passed 00:11:59.007 Test: blockdev write read invalid size ...passed 00:11:59.007 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:59.007 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:59.007 Test: blockdev write read max offset ...passed 00:11:59.007 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:59.007 Test: blockdev writev readv 8 blocks ...passed 00:11:59.007 Test: blockdev writev readv 30 x 1block ...passed 00:11:59.007 Test: blockdev writev readv block ...passed 00:11:59.007 Test: blockdev writev readv size > 128k ...passed 00:11:59.007 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:59.007 Test: blockdev comparev and writev ...passed 00:11:59.007 Test: blockdev nvme passthru rw ...passed 00:11:59.007 Test: blockdev nvme passthru vendor specific ...passed 00:11:59.007 Test: blockdev nvme admin passthru ...passed 00:11:59.007 Test: blockdev copy ...passed 00:11:59.007 Suite: bdevio tests on: nvme1n1 00:11:59.007 Test: blockdev write read block ...passed 00:11:59.007 Test: blockdev write zeroes read block ...passed 00:11:59.007 Test: blockdev write zeroes read no split ...passed 00:11:59.007 Test: blockdev write zeroes read split ...passed 00:11:59.007 Test: blockdev write zeroes read split partial ...passed 00:11:59.007 Test: blockdev reset ...passed 00:11:59.007 Test: blockdev write read 8 blocks ...passed 00:11:59.007 Test: blockdev write read size > 128k ...passed 00:11:59.007 Test: blockdev write read invalid size ...passed 00:11:59.007 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:59.007 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:59.007 Test: blockdev write read max offset ...passed 00:11:59.007 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:59.007 Test: blockdev writev readv 8 blocks ...passed 00:11:59.007 Test: blockdev writev readv 30 x 1block ...passed 00:11:59.007 Test: blockdev writev readv block ...passed 00:11:59.007 Test: blockdev writev readv size > 128k ...passed 00:11:59.007 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:59.007 Test: blockdev comparev and writev ...passed 00:11:59.007 Test: blockdev nvme passthru rw ...passed 00:11:59.007 Test: blockdev nvme passthru vendor specific ...passed 00:11:59.007 Test: blockdev nvme admin passthru ...passed 00:11:59.007 Test: blockdev copy ...passed 00:11:59.007 Suite: bdevio tests on: nvme0n1 00:11:59.007 Test: blockdev write read block ...passed 00:11:59.007 Test: blockdev write zeroes read block ...passed 00:11:59.007 Test: blockdev write zeroes read no split ...passed 00:11:59.007 Test: blockdev write zeroes read split ...passed 00:11:59.007 Test: blockdev write zeroes read split partial ...passed 00:11:59.007 Test: blockdev reset ...passed 00:11:59.007 Test: blockdev write read 8 blocks ...passed 00:11:59.007 Test: blockdev write read size > 128k ...passed 00:11:59.007 Test: blockdev write read invalid size ...passed 00:11:59.007 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:59.007 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:59.007 Test: blockdev write read max offset ...passed 
00:11:59.007 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:59.007 Test: blockdev writev readv 8 blocks ...passed 00:11:59.007 Test: blockdev writev readv 30 x 1block ...passed 00:11:59.007 Test: blockdev writev readv block ...passed 00:11:59.007 Test: blockdev writev readv size > 128k ...passed 00:11:59.007 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:59.007 Test: blockdev comparev and writev ...passed 00:11:59.007 Test: blockdev nvme passthru rw ...passed 00:11:59.007 Test: blockdev nvme passthru vendor specific ...passed 00:11:59.007 Test: blockdev nvme admin passthru ...passed 00:11:59.007 Test: blockdev copy ...passed 00:11:59.007 00:11:59.007 Run Summary: Type Total Ran Passed Failed Inactive 00:11:59.007 suites 6 6 n/a 0 0 00:11:59.007 tests 138 138 138 0 0 00:11:59.007 asserts 780 780 780 0 n/a 00:11:59.007 00:11:59.007 Elapsed time = 0.584 seconds 00:11:59.007 0 00:11:59.007 10:43:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 82548 00:11:59.007 10:43:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 82548 ']' 00:11:59.007 10:43:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 82548 00:11:59.007 10:43:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:11:59.007 10:43:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:59.007 10:43:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82548 00:11:59.268 killing process with pid 82548 00:11:59.268 10:43:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:59.268 10:43:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:59.268 10:43:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82548' 00:11:59.268 10:43:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 82548 00:11:59.268 10:43:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 82548 00:11:59.529 10:43:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:59.529 00:11:59.529 real 0m1.638s 00:11:59.529 user 0m3.901s 00:11:59.529 sys 0m0.340s 00:11:59.529 10:43:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.529 10:43:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:59.529 ************************************ 00:11:59.529 END TEST bdev_bounds 00:11:59.529 ************************************ 00:11:59.529 10:43:19 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:11:59.529 10:43:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:59.529 10:43:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.529 10:43:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:59.529 ************************************ 00:11:59.529 START TEST bdev_nbd 00:11:59.529 ************************************ 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- 
# uname -s 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=82602 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 82602 /var/tmp/spdk-nbd.sock 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 82602 ']' 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:59.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:59.530 10:43:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:59.530 [2024-10-08 10:43:20.028973] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
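In the nbd phase below, each xnvme bdev is exported as a kernel /dev/nbdN node through the /var/tmp/spdk-nbd.sock RPC socket and then verified with a direct 4 KiB read. A minimal sketch of that start/verify pair, with names taken from this log; passing the target node explicitly here is an assumption of the sketch, while the trace lets the target choose one:

    # Sketch: export a bdev over NBD via SPDK's rpc.py, then confirm the
    # node appears in /proc/partitions and answers a direct read (the
    # waitfornbd/dd pattern used by the trace below).
    sock=/var/tmp/spdk-nbd.sock
    scripts/rpc.py -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0
    grep -q -w nbd0 /proc/partitions &&
        dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct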
00:11:59.530 [2024-10-08 10:43:20.029331] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.791 [2024-10-08 10:43:20.167041] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:59.791 [2024-10-08 10:43:20.180964] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.791 [2024-10-08 10:43:20.255932] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:00.363 10:43:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:00.624 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:00.625 1+0 records in 00:12:00.625 1+0 records out 00:12:00.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000815492 s, 5.0 MB/s 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:00.625 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:00.886 1+0 records in 00:12:00.886 1+0 records out 00:12:00.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000918382 s, 4.5 MB/s 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:00.886 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:01.147 10:43:21 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:01.147 1+0 records in 00:12:01.147 1+0 records out 00:12:01.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127385 s, 3.2 MB/s 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:01.147 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:01.408 1+0 records 
in 00:12:01.408 1+0 records out 00:12:01.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00098646 s, 4.2 MB/s 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:01.408 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:01.409 10:43:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:01.669 1+0 records in 00:12:01.669 1+0 records out 00:12:01.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108876 s, 3.8 MB/s 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:01.669 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:01.931 1+0 records in 00:12:01.931 1+0 records out 00:12:01.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127371 s, 3.2 MB/s 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:01.931 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:02.192 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:02.192 { 00:12:02.192 "nbd_device": "/dev/nbd0", 00:12:02.192 "bdev_name": "nvme0n1" 00:12:02.192 }, 00:12:02.192 { 00:12:02.192 "nbd_device": "/dev/nbd1", 00:12:02.192 "bdev_name": "nvme1n1" 00:12:02.192 }, 00:12:02.192 { 00:12:02.192 "nbd_device": "/dev/nbd2", 00:12:02.192 "bdev_name": "nvme2n1" 00:12:02.192 }, 00:12:02.192 { 00:12:02.192 "nbd_device": "/dev/nbd3", 00:12:02.192 "bdev_name": "nvme2n2" 00:12:02.192 }, 00:12:02.192 { 00:12:02.192 "nbd_device": "/dev/nbd4", 00:12:02.192 "bdev_name": "nvme2n3" 00:12:02.192 }, 00:12:02.192 { 00:12:02.192 "nbd_device": "/dev/nbd5", 00:12:02.192 "bdev_name": "nvme3n1" 00:12:02.192 } 00:12:02.192 ]' 00:12:02.192 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:02.192 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:02.192 { 00:12:02.192 "nbd_device": "/dev/nbd0", 00:12:02.192 "bdev_name": "nvme0n1" 00:12:02.192 }, 00:12:02.192 { 00:12:02.192 "nbd_device": "/dev/nbd1", 00:12:02.192 "bdev_name": "nvme1n1" 00:12:02.192 }, 00:12:02.192 { 00:12:02.192 "nbd_device": "/dev/nbd2", 00:12:02.192 "bdev_name": "nvme2n1" 00:12:02.192 }, 00:12:02.192 { 00:12:02.192 "nbd_device": "/dev/nbd3", 00:12:02.192 "bdev_name": "nvme2n2" 00:12:02.192 }, 00:12:02.193 { 00:12:02.193 "nbd_device": "/dev/nbd4", 00:12:02.193 "bdev_name": 
"nvme2n3" 00:12:02.193 }, 00:12:02.193 { 00:12:02.193 "nbd_device": "/dev/nbd5", 00:12:02.193 "bdev_name": "nvme3n1" 00:12:02.193 } 00:12:02.193 ]' 00:12:02.193 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:02.193 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:02.193 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:02.193 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:02.193 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:02.193 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:02.193 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.193 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:02.454 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:02.454 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:02.454 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:02.454 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.454 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.454 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:02.454 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:02.454 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.454 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.454 10:43:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:02.714 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:02.714 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:02.714 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:02.714 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.714 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.714 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:02.714 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:02.714 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.714 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.714 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.975 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:03.236 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:03.237 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:03.237 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:03.237 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.237 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.237 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:03.237 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:03.237 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.237 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.237 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:03.498 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:03.498 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:03.498 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:03.498 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.498 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.498 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:03.498 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:03.498 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.498 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:03.498 10:43:23 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.498 10:43:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:03.760 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:04.021 /dev/nbd0 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.021 1+0 records in 00:12:04.021 1+0 records out 00:12:04.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000856922 s, 4.8 MB/s 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:04.021 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:04.284 /dev/nbd1 00:12:04.284 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:04.284 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:04.284 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:04.284 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:04.284 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:04.284 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:04.285 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:04.285 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:04.285 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:04.285 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:04.285 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.285 1+0 records in 00:12:04.285 1+0 records out 00:12:04.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120159 s, 3.4 MB/s 00:12:04.285 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.285 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:04.285 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.285 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:04.285 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:04.285 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.285 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:04.285 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:12:04.611 /dev/nbd10 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.611 1+0 records in 00:12:04.611 1+0 records out 00:12:04.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116681 s, 3.5 MB/s 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:04.611 10:43:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:12:04.611 /dev/nbd11 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.872 1+0 records in 00:12:04.872 1+0 records out 00:12:04.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000838994 s, 4.9 MB/s 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:12:04.872 /dev/nbd12 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:04.872 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:04.873 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.873 1+0 records in 00:12:04.873 1+0 records out 00:12:04.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075434 s, 5.4 MB/s 00:12:04.873 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.873 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:04.873 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.873 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:12:04.873 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:04.873 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.873 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:04.873 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:05.134 /dev/nbd13 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.134 1+0 records in 00:12:05.134 1+0 records out 00:12:05.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000823796 s, 5.0 MB/s 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.134 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:05.396 { 00:12:05.396 "nbd_device": "/dev/nbd0", 00:12:05.396 "bdev_name": "nvme0n1" 00:12:05.396 }, 00:12:05.396 { 00:12:05.396 "nbd_device": "/dev/nbd1", 00:12:05.396 "bdev_name": "nvme1n1" 00:12:05.396 }, 00:12:05.396 { 00:12:05.396 "nbd_device": "/dev/nbd10", 00:12:05.396 "bdev_name": "nvme2n1" 00:12:05.396 }, 00:12:05.396 { 00:12:05.396 "nbd_device": "/dev/nbd11", 00:12:05.396 "bdev_name": "nvme2n2" 00:12:05.396 }, 00:12:05.396 { 00:12:05.396 "nbd_device": "/dev/nbd12", 00:12:05.396 "bdev_name": "nvme2n3" 00:12:05.396 }, 
00:12:05.396 { 00:12:05.396 "nbd_device": "/dev/nbd13", 00:12:05.396 "bdev_name": "nvme3n1" 00:12:05.396 } 00:12:05.396 ]' 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:05.396 { 00:12:05.396 "nbd_device": "/dev/nbd0", 00:12:05.396 "bdev_name": "nvme0n1" 00:12:05.396 }, 00:12:05.396 { 00:12:05.396 "nbd_device": "/dev/nbd1", 00:12:05.396 "bdev_name": "nvme1n1" 00:12:05.396 }, 00:12:05.396 { 00:12:05.396 "nbd_device": "/dev/nbd10", 00:12:05.396 "bdev_name": "nvme2n1" 00:12:05.396 }, 00:12:05.396 { 00:12:05.396 "nbd_device": "/dev/nbd11", 00:12:05.396 "bdev_name": "nvme2n2" 00:12:05.396 }, 00:12:05.396 { 00:12:05.396 "nbd_device": "/dev/nbd12", 00:12:05.396 "bdev_name": "nvme2n3" 00:12:05.396 }, 00:12:05.396 { 00:12:05.396 "nbd_device": "/dev/nbd13", 00:12:05.396 "bdev_name": "nvme3n1" 00:12:05.396 } 00:12:05.396 ]' 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:05.396 /dev/nbd1 00:12:05.396 /dev/nbd10 00:12:05.396 /dev/nbd11 00:12:05.396 /dev/nbd12 00:12:05.396 /dev/nbd13' 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:05.396 /dev/nbd1 00:12:05.396 /dev/nbd10 00:12:05.396 /dev/nbd11 00:12:05.396 /dev/nbd12 00:12:05.396 /dev/nbd13' 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:05.396 256+0 records in 00:12:05.396 256+0 records out 00:12:05.396 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113282 s, 92.6 MB/s 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:05.396 10:43:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:05.657 256+0 records in 00:12:05.657 256+0 records out 00:12:05.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174585 s, 6.0 MB/s 00:12:05.657 10:43:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:05.657 10:43:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:05.918 256+0 records in 00:12:05.918 256+0 records out 00:12:05.918 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159532 s, 6.6 MB/s 00:12:05.918 10:43:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:05.918 10:43:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:06.180 256+0 records in 00:12:06.180 256+0 records out 00:12:06.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.203261 s, 5.2 MB/s 00:12:06.180 10:43:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:06.180 10:43:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:06.180 256+0 records in 00:12:06.180 256+0 records out 00:12:06.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.235776 s, 4.4 MB/s 00:12:06.180 10:43:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:06.180 10:43:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:06.441 256+0 records in 00:12:06.441 256+0 records out 00:12:06.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238832 s, 4.4 MB/s 00:12:06.441 10:43:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:06.441 10:43:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:06.703 256+0 records in 00:12:06.703 256+0 records out 00:12:06.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.246357 s, 4.3 MB/s 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:12:06.703 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:06.964 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:07.225 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:07.225 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:07.225 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:07.225 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.225 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.225 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:07.225 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:07.225 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.226 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:12:07.226 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:07.486 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:07.486 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:07.486 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:07.486 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.486 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.486 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:07.486 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:07.486 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.486 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:07.486 10:43:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:07.747 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:07.748 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:07.748 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:07.748 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:07.748 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:07.748 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:07.748 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:07.748 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:07.748 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:07.748 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:08.008 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:08.008 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:08.008 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:08.008 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:08.008 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:08.008 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:08.008 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:08.008 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:08.008 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:08.008 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:08.270 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:08.270 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:08.270 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:08.270 
10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:08.270 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:08.270 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:08.270 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:08.270 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:08.270 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:08.270 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:08.270 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:12:08.532 10:43:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:08.793 malloc_lvol_verify 00:12:08.793 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:08.793 e23f80c5-e846-4a70-9184-9b81246c38b2 00:12:08.793 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:09.055 2d121206-4b24-42ea-87bf-5404794b858c 00:12:09.055 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:09.316 /dev/nbd0 00:12:09.316 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:12:09.316 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:12:09.316 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:12:09.316 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 
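The write/verify pass traced above is a plain round trip: generate 1 MiB of random data once, copy it onto every exported NBD device with O_DIRECT, then byte-compare the first 1 MiB of each device against the source file. A condensed sketch of that flow, reconstructed from the dd and cmp commands in the trace (the scratch path is shortened here):

    # Condensed reconstruction of the nbd write/verify pass from the trace.
    # /tmp/nbdrandtest stands in for test/bdev/nbdrandtest.
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    tmp_file=/tmp/nbdrandtest

    # Write pass: 256 x 4 KiB = 1 MiB of random data onto each device.
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify pass: cmp exits non-zero on the first differing byte,
    # which would fail the test in an errexit shell.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"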
00:12:09.316 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:12:09.316 mke2fs 1.47.0 (5-Feb-2023) 00:12:09.316 Discarding device blocks: 0/4096 done 00:12:09.316 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:09.316 00:12:09.316 Allocating group tables: 0/1 done 00:12:09.316 Writing inode tables: 0/1 done 00:12:09.316 Creating journal (1024 blocks): done 00:12:09.316 Writing superblocks and filesystem accounting information: 0/1 done 00:12:09.316 00:12:09.316 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:09.316 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:09.316 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:09.316 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:09.316 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:09.316 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:09.316 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:09.577 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:09.577 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:09.577 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:09.577 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:09.577 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:09.577 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:09.577 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:09.577 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.577 10:43:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 82602 00:12:09.577 10:43:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 82602 ']' 00:12:09.577 10:43:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 82602 00:12:09.577 10:43:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:12:09.577 10:43:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.577 10:43:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82602 00:12:09.577 killing process with pid 82602 00:12:09.578 10:43:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:09.578 10:43:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:09.578 10:43:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82602' 00:12:09.578 10:43:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 82602 00:12:09.578 10:43:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 82602 00:12:09.839 10:43:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:12:09.839 00:12:09.839 real 0m10.232s 00:12:09.839 user 0m13.928s 00:12:09.839 sys 0m3.781s 00:12:09.839 ************************************ 00:12:09.839 END TEST bdev_nbd 00:12:09.839 ************************************ 00:12:09.839 
10:43:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.839 10:43:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:09.839 10:43:30 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:12:09.839 10:43:30 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:12:09.839 10:43:30 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:12:09.839 10:43:30 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:12:09.839 10:43:30 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:09.839 10:43:30 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.839 10:43:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:09.839 ************************************ 00:12:09.839 START TEST bdev_fio 00:12:09.839 ************************************ 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:09.839 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == 
*\f\i\o\-\3* ]] 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:12:09.839 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:09.840 ************************************ 00:12:09.840 START TEST bdev_fio_rw_verify 00:12:09.840 ************************************ 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 
--bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:09.840 10:43:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:10.101 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:10.101 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:10.101 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:10.102 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:10.102 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:10.102 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:10.102 fio-3.35 00:12:10.102 Starting 6 threads 00:12:21.232 00:12:21.232 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=83000: Tue Oct 8 10:43:41 2024 00:12:21.232 read: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(463MiB/10002msec) 00:12:21.232 slat (usec): min=2, 
max=2737, avg= 7.17, stdev=14.97 00:12:21.232 clat (usec): min=102, max=727074, avg=1744.05, stdev=6023.82 00:12:21.232 lat (usec): min=106, max=727093, avg=1751.23, stdev=6023.91 00:12:21.232 clat percentiles (usec): 00:12:21.232 | 50.000th=[ 1582], 99.000th=[ 4359], 99.900th=[ 6128], 00:12:21.232 | 99.990th=[ 11600], 99.999th=[725615] 00:12:21.232 write: IOPS=12.1k, BW=47.1MiB/s (49.4MB/s)(471MiB/10002msec); 0 zone resets 00:12:21.232 slat (usec): min=12, max=6768, avg=45.09, stdev=164.57 00:12:21.232 clat (usec): min=110, max=11234, avg=1917.27, stdev=945.10 00:12:21.232 lat (usec): min=127, max=11261, avg=1962.36, stdev=960.25 00:12:21.232 clat percentiles (usec): 00:12:21.232 | 50.000th=[ 1762], 99.000th=[ 4883], 99.900th=[ 6587], 99.990th=[ 8455], 00:12:21.232 | 99.999th=[11207] 00:12:21.232 bw ( KiB/s): min=44796, max=50112, per=100.00%, avg=48290.11, stdev=409.28, samples=114 00:12:21.232 iops : min=11196, max=12528, avg=12071.32, stdev=102.31, samples=114 00:12:21.232 lat (usec) : 250=0.48%, 500=2.89%, 750=5.29%, 1000=8.41% 00:12:21.232 lat (msec) : 2=48.37%, 4=31.99%, 10=2.56%, 20=0.01%, 750=0.01% 00:12:21.232 cpu : usr=47.39%, sys=30.00%, ctx=4084, majf=0, minf=12899 00:12:21.232 IO depths : 1=11.3%, 2=23.7%, 4=51.3%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:21.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.232 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.232 issued rwts: total=118417,120640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.232 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:21.232 00:12:21.232 Run status group 0 (all jobs): 00:12:21.232 READ: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=463MiB (485MB), run=10002-10002msec 00:12:21.232 WRITE: bw=47.1MiB/s (49.4MB/s), 47.1MiB/s-47.1MiB/s (49.4MB/s-49.4MB/s), io=471MiB (494MB), run=10002-10002msec 00:12:21.232 ----------------------------------------------------- 00:12:21.232 Suppressions used: 00:12:21.232 count bytes template 00:12:21.232 6 48 /usr/src/fio/parse.c 00:12:21.232 2146 206016 /usr/src/fio/iolog.c 00:12:21.232 1 8 libtcmalloc_minimal.so 00:12:21.232 1 904 libcrypto.so 00:12:21.232 ----------------------------------------------------- 00:12:21.232 00:12:21.232 00:12:21.232 real 0m11.204s 00:12:21.232 user 0m29.175s 00:12:21.232 sys 0m18.350s 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:12:21.232 ************************************ 00:12:21.232 END TEST bdev_fio_rw_verify 00:12:21.232 ************************************ 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # 
local env_context= 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:12:21.232 10:43:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "814b3132-c0f3-485d-bcc9-b90a66cafd8a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "814b3132-c0f3-485d-bcc9-b90a66cafd8a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "61545dff-6914-40cd-8be4-ba79ab2a8ed5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "61545dff-6914-40cd-8be4-ba79ab2a8ed5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4a0d9896-8290-4997-9af4-07e79d213a68"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4a0d9896-8290-4997-9af4-07e79d213a68",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' 
' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "9cc7dee8-05df-4d09-97f3-df9c0d608ebc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9cc7dee8-05df-4d09-97f3-df9c0d608ebc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "ali 10:43:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:21.233 ases": [' ' "ad310326-632f-47f6-8d1e-cc1836a45003"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ad310326-632f-47f6-8d1e-cc1836a45003",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e5edace7-8d1b-4357-b7a5-0ca9f0798423"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e5edace7-8d1b-4357-b7a5-0ca9f0798423",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:21.233 10:43:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:12:21.233 10:43:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:21.233 /home/vagrant/spdk_repo/spdk 00:12:21.233 10:43:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:12:21.233 10:43:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - 
SIGINT SIGTERM EXIT 00:12:21.233 10:43:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:12:21.233 00:12:21.233 real 0m11.368s 00:12:21.233 user 0m29.248s 00:12:21.233 sys 0m18.422s 00:12:21.233 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.233 10:43:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:21.233 ************************************ 00:12:21.233 END TEST bdev_fio 00:12:21.233 ************************************ 00:12:21.233 10:43:41 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:21.233 10:43:41 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:21.233 10:43:41 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:12:21.233 10:43:41 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.233 10:43:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.233 ************************************ 00:12:21.233 START TEST bdev_verify 00:12:21.233 ************************************ 00:12:21.233 10:43:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:21.233 [2024-10-08 10:43:41.748275] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:12:21.233 [2024-10-08 10:43:41.748432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83171 ] 00:12:21.493 [2024-10-08 10:43:41.882300] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:21.493 [2024-10-08 10:43:41.902342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:21.493 [2024-10-08 10:43:41.953408] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.493 [2024-10-08 10:43:41.953490] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.754 Running I/O for 5 seconds... 
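The verify stage above drives SPDK's bdevperf example app against the xnvme bdevs described by bdev.json. A minimal sketch of the same invocation, using the paths from this run:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

Here -q is the per-job queue depth, -o the IO size in bytes, -w the workload, -t the runtime in seconds, and -m the reactor core mask; -C appears to fan each bdev out to every core in the mask, which is why each device shows paired Core Mask 0x1 and 0x2 rows in the table that follows.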
00:12:24.086 25824.00 IOPS, 100.88 MiB/s [2024-10-08T10:43:45.608Z] 24944.00 IOPS, 97.44 MiB/s [2024-10-08T10:43:46.553Z] 25482.67 IOPS, 99.54 MiB/s [2024-10-08T10:43:47.497Z] 25096.00 IOPS, 98.03 MiB/s [2024-10-08T10:43:47.497Z] 24736.00 IOPS, 96.62 MiB/s 00:12:26.920 Latency(us) 00:12:26.920 [2024-10-08T10:43:47.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.920 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:26.920 Verification LBA range: start 0x0 length 0xa0000 00:12:26.920 nvme0n1 : 5.05 1925.10 7.52 0.00 0.00 66349.58 8469.27 77433.30 00:12:26.920 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:26.920 Verification LBA range: start 0xa0000 length 0xa0000 00:12:26.920 nvme0n1 : 5.06 1972.51 7.71 0.00 0.00 64780.59 9931.22 70173.93 00:12:26.920 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:26.920 Verification LBA range: start 0x0 length 0xbd0bd 00:12:26.920 nvme1n1 : 5.06 2433.87 9.51 0.00 0.00 52237.34 6276.33 62914.56 00:12:26.920 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:26.920 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:12:26.920 nvme1n1 : 5.05 2456.95 9.60 0.00 0.00 51891.58 5419.32 59284.87 00:12:26.920 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:26.920 Verification LBA range: start 0x0 length 0x80000 00:12:26.920 nvme2n1 : 5.07 1945.10 7.60 0.00 0.00 65207.98 8418.86 69770.63 00:12:26.920 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:26.920 Verification LBA range: start 0x80000 length 0x80000 00:12:26.920 nvme2n1 : 5.07 1995.74 7.80 0.00 0.00 63612.24 9175.04 70173.93 00:12:26.920 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:26.920 Verification LBA range: start 0x0 length 0x80000 00:12:26.920 nvme2n2 : 5.07 1917.95 7.49 0.00 0.00 65978.07 8771.74 75013.51 00:12:26.920 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:26.920 Verification LBA range: start 0x80000 length 0x80000 00:12:26.920 nvme2n2 : 5.08 1966.02 7.68 0.00 0.00 64454.50 16232.76 60091.47 00:12:26.920 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:26.920 Verification LBA range: start 0x0 length 0x80000 00:12:26.920 nvme2n3 : 5.08 1915.73 7.48 0.00 0.00 65907.07 6200.71 81466.29 00:12:26.920 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:26.920 Verification LBA range: start 0x80000 length 0x80000 00:12:26.920 nvme2n3 : 5.08 1964.01 7.67 0.00 0.00 64413.84 13308.85 65334.35 00:12:26.920 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:26.920 Verification LBA range: start 0x0 length 0x20000 00:12:26.920 nvme3n1 : 5.10 1934.22 7.56 0.00 0.00 65207.53 3503.66 81466.29 00:12:26.920 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:26.920 Verification LBA range: start 0x20000 length 0x20000 00:12:26.920 nvme3n1 : 5.07 1968.47 7.69 0.00 0.00 64153.28 7763.50 70173.93 00:12:26.920 [2024-10-08T10:43:47.497Z] =================================================================================================================== 00:12:26.920 [2024-10-08T10:43:47.497Z] Total : 24395.67 95.30 0.00 0.00 62409.42 3503.66 81466.29 00:12:27.197 00:12:27.197 real 0m5.885s 00:12:27.197 user 0m9.383s 00:12:27.197 sys 0m1.433s 00:12:27.197 10:43:47 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.197 ************************************ 00:12:27.197 END TEST bdev_verify 00:12:27.197 ************************************ 00:12:27.197 10:43:47 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:27.197 10:43:47 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:27.197 10:43:47 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:12:27.197 10:43:47 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.197 10:43:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:27.197 ************************************ 00:12:27.197 START TEST bdev_verify_big_io 00:12:27.197 ************************************ 00:12:27.197 10:43:47 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:27.197 [2024-10-08 10:43:47.703148] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:12:27.197 [2024-10-08 10:43:47.703305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83265 ] 00:12:27.481 [2024-10-08 10:43:47.838174] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:27.481 [2024-10-08 10:43:47.858396] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:27.481 [2024-10-08 10:43:47.907999] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.481 [2024-10-08 10:43:47.908089] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.743 Running I/O for 5 seconds... 
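bdev_verify_big_io repeats the verify run with the IO size raised from 4 KiB to 64 KiB; relative to the sketch above, only the -o argument changes (paths repo-relative here):

  build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3

The MiB/s column in these tables is simply IOPS x IO size / 2^20, which at 65536 bytes is IOPS / 16; the totals at the bottom of the table below agree: 1548.64 / 16 = 96.79 MiB/s. The 4 KiB verify run above divides by 256 instead: 24395.67 / 256 = 95.30 MiB/s. Larger IOs trade IOPS for payload, so bandwidth stays in the same ~95 MiB/s band while IOPS drop from roughly 24k to 1.5k.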
00:12:33.597 1828.00 IOPS, 114.25 MiB/s [2024-10-08T10:43:54.745Z] 2985.50 IOPS, 186.59 MiB/s 00:12:34.168 Latency(us) 00:12:34.168 [2024-10-08T10:43:54.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.168 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:34.168 Verification LBA range: start 0x0 length 0xa000 00:12:34.168 nvme0n1 : 6.09 105.15 6.57 0.00 0.00 1184654.97 13006.38 1574477.19 00:12:34.168 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:34.168 Verification LBA range: start 0xa000 length 0xa000 00:12:34.168 nvme0n1 : 5.71 112.11 7.01 0.00 0.00 1040510.03 274242.95 929199.66 00:12:34.168 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:34.168 Verification LBA range: start 0x0 length 0xbd0b 00:12:34.168 nvme1n1 : 6.09 105.12 6.57 0.00 0.00 1109973.78 56461.78 1309913.40 00:12:34.168 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:34.168 Verification LBA range: start 0xbd0b length 0xbd0b 00:12:34.168 nvme1n1 : 5.93 180.67 11.29 0.00 0.00 639222.35 7965.14 774333.05 00:12:34.168 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:34.168 Verification LBA range: start 0x0 length 0x8000 00:12:34.168 nvme2n1 : 6.15 88.44 5.53 0.00 0.00 1259836.37 10435.35 1206669.00 00:12:34.168 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:34.168 Verification LBA range: start 0x8000 length 0x8000 00:12:34.168 nvme2n1 : 5.95 118.30 7.39 0.00 0.00 1002318.98 102437.81 1516402.22 00:12:34.168 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:34.168 Verification LBA range: start 0x0 length 0x8000 00:12:34.168 nvme2n2 : 6.15 104.01 6.50 0.00 0.00 1024202.59 61704.66 1974549.27 00:12:34.168 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:34.168 Verification LBA range: start 0x8000 length 0x8000 00:12:34.168 nvme2n2 : 5.94 145.54 9.10 0.00 0.00 777400.67 21677.29 871124.68 00:12:34.168 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:34.168 Verification LBA range: start 0x0 length 0x8000 00:12:34.168 nvme2n3 : 6.17 111.49 6.97 0.00 0.00 924341.46 23693.78 2787598.97 00:12:34.168 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:34.168 Verification LBA range: start 0x8000 length 0x8000 00:12:34.168 nvme2n3 : 5.96 158.46 9.90 0.00 0.00 710944.38 15022.87 1013085.74 00:12:34.168 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:34.168 Verification LBA range: start 0x0 length 0x2000 00:12:34.168 nvme3n1 : 6.36 171.19 10.70 0.00 0.00 575947.38 304.05 3252198.79 00:12:34.168 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:34.168 Verification LBA range: start 0x2000 length 0x2000 00:12:34.168 nvme3n1 : 5.96 148.17 9.26 0.00 0.00 738331.61 6251.13 1961643.72 00:12:34.168 [2024-10-08T10:43:54.745Z] =================================================================================================================== 00:12:34.168 [2024-10-08T10:43:54.745Z] Total : 1548.64 96.79 0.00 0.00 869106.51 304.05 3252198.79 00:12:34.429 00:12:34.429 real 0m7.193s 00:12:34.429 user 0m13.193s 00:12:34.429 sys 0m0.449s 00:12:34.429 10:43:54 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.429 ************************************ 00:12:34.429 END TEST bdev_verify_big_io 
00:12:34.429 ************************************ 00:12:34.429 10:43:54 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.429 10:43:54 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:34.429 10:43:54 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:34.429 10:43:54 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.429 10:43:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:34.429 ************************************ 00:12:34.429 START TEST bdev_write_zeroes 00:12:34.429 ************************************ 00:12:34.429 10:43:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:34.429 [2024-10-08 10:43:54.961739] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:12:34.429 [2024-10-08 10:43:54.961892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83369 ] 00:12:34.690 [2024-10-08 10:43:55.094746] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:34.690 [2024-10-08 10:43:55.116614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.690 [2024-10-08 10:43:55.167417] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.951 Running I/O for 1 seconds... 
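bdev_write_zeroes swaps in a one-second write_zeroes workload and drops the multi-core flags, so bdevperf runs on a single reactor (hence 'Total cores available: 1' above and the Core Mask 0x1-only rows below). Against the verify sketch, the changed arguments are:

  -q 128 -o 4096 -w write_zeroes -t 1    # no -C and no -m 0x3 this time

Every xnvme bdev in the earlier JSON dump advertises "write_zeroes": true, so all six devices take part.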
00:12:35.894 74208.00 IOPS, 289.88 MiB/s 00:12:35.894 Latency(us) 00:12:35.894 [2024-10-08T10:43:56.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.894 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.894 nvme0n1 : 1.02 12167.10 47.53 0.00 0.00 10509.85 6225.92 23693.78 00:12:35.894 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.894 nvme1n1 : 1.02 13323.65 52.05 0.00 0.00 9587.14 5394.12 19055.85 00:12:35.894 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.894 nvme2n1 : 1.02 12152.84 47.47 0.00 0.00 10437.92 4612.73 22080.59 00:12:35.894 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.894 nvme2n2 : 1.03 12104.36 47.28 0.00 0.00 10470.83 4663.14 22685.54 00:12:35.894 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.894 nvme2n3 : 1.02 12069.23 47.15 0.00 0.00 10492.19 4713.55 21979.77 00:12:35.894 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:35.894 nvme3n1 : 1.02 12055.38 47.09 0.00 0.00 10496.22 4839.58 22383.06 00:12:35.894 [2024-10-08T10:43:56.471Z] =================================================================================================================== 00:12:35.894 [2024-10-08T10:43:56.471Z] Total : 73872.56 288.56 0.00 0.00 10320.73 4612.73 23693.78 00:12:36.154 00:12:36.155 real 0m1.762s 00:12:36.155 user 0m1.076s 00:12:36.155 sys 0m0.512s 00:12:36.155 10:43:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:36.155 ************************************ 00:12:36.155 END TEST bdev_write_zeroes 00:12:36.155 ************************************ 00:12:36.155 10:43:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:36.155 10:43:56 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:36.155 10:43:56 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:36.155 10:43:56 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:36.155 10:43:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:36.155 ************************************ 00:12:36.155 START TEST bdev_json_nonenclosed 00:12:36.155 ************************************ 00:12:36.155 10:43:56 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:36.416 [2024-10-08 10:43:56.793397] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:12:36.416 [2024-10-08 10:43:56.793539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83410 ] 00:12:36.416 [2024-10-08 10:43:56.927235] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
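bdev_json_nonenclosed, and bdev_json_nonarray after it, are negative tests: each hands bdevperf a deliberately malformed config and expects json_config_prepare_ctx to refuse it. The fixture files themselves are not reproduced in this log; hypothetical contents of roughly the following shape would trigger the two errors recorded below:

  nonenclosed.json (top-level content not enclosed in {}):
    "subsystems": []

  nonarray.json ("subsystems" present but not an array):
    { "subsystems": {} }

In both cases the app prints the *ERROR* line, spdk_app_stop exits non-zero, and the test still passes because the rejection is the expected outcome.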
00:12:36.416 [2024-10-08 10:43:56.949000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.678 [2024-10-08 10:43:56.999260] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.678 [2024-10-08 10:43:56.999377] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:36.678 [2024-10-08 10:43:56.999397] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:36.678 [2024-10-08 10:43:56.999409] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:36.678 00:12:36.678 real 0m0.385s 00:12:36.678 user 0m0.159s 00:12:36.678 sys 0m0.121s 00:12:36.678 10:43:57 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:36.678 ************************************ 00:12:36.678 END TEST bdev_json_nonenclosed 00:12:36.678 ************************************ 00:12:36.678 10:43:57 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:36.678 10:43:57 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:36.678 10:43:57 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:36.678 10:43:57 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:36.678 10:43:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:36.678 ************************************ 00:12:36.678 START TEST bdev_json_nonarray 00:12:36.678 ************************************ 00:12:36.678 10:43:57 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:36.678 [2024-10-08 10:43:57.243696] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:12:36.678 [2024-10-08 10:43:57.243857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83431 ] 00:12:36.939 [2024-10-08 10:43:57.376161] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:36.939 [2024-10-08 10:43:57.397713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.939 [2024-10-08 10:43:57.446515] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.939 [2024-10-08 10:43:57.446638] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:12:36.939 [2024-10-08 10:43:57.446661] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:36.939 [2024-10-08 10:43:57.446672] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:37.201 00:12:37.201 real 0m0.378s 00:12:37.201 user 0m0.150s 00:12:37.201 sys 0m0.124s 00:12:37.201 10:43:57 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.201 ************************************ 00:12:37.201 END TEST bdev_json_nonarray 00:12:37.201 ************************************ 00:12:37.201 10:43:57 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:37.201 10:43:57 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:12:37.201 10:43:57 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:12:37.201 10:43:57 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:12:37.201 10:43:57 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:12:37.201 10:43:57 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:12:37.201 10:43:57 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:37.201 10:43:57 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:37.201 10:43:57 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:12:37.201 10:43:57 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:12:37.201 10:43:57 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:12:37.201 10:43:57 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:12:37.201 10:43:57 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:37.775 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:40.323 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:40.323 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:40.323 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:40.323 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:40.323 ************************************ 00:12:40.323 END TEST blockdev_xnvme 00:12:40.323 ************************************ 00:12:40.323 00:12:40.323 real 0m50.894s 00:12:40.323 user 1m19.675s 00:12:40.323 sys 0m31.080s 00:12:40.323 10:44:00 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.323 10:44:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:40.585 10:44:00 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:40.585 10:44:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:40.585 10:44:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.585 10:44:00 -- common/autotest_common.sh@10 -- # set +x 00:12:40.585 ************************************ 00:12:40.585 START TEST ublk 00:12:40.585 ************************************ 00:12:40.585 10:44:00 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:12:40.585 * Looking for test storage... 
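The setup.sh pass above unbinds the four emulated NVMe controllers from the kernel nvme driver and hands them to uio_pci_generic so the next suite can claim them from user space; the virtio boot disk stays on its kernel driver because it holds active mounts ('Active devices: mount@vda:...'). The script can be driven by hand from an SPDK checkout, for example:

  sudo scripts/setup.sh           # bind supported devices to uio_pci_generic or vfio-pci
  sudo scripts/setup.sh status    # show current driver bindings
  sudo scripts/setup.sh reset     # return devices to their kernel drivers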
00:12:40.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:12:40.585 10:44:00 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:40.585 10:44:00 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:12:40.585 10:44:00 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:40.585 10:44:01 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:40.585 10:44:01 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.585 10:44:01 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.585 10:44:01 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.585 10:44:01 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.585 10:44:01 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.585 10:44:01 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.585 10:44:01 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.585 10:44:01 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.585 10:44:01 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.585 10:44:01 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.585 10:44:01 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.585 10:44:01 ublk -- scripts/common.sh@344 -- # case "$op" in 00:12:40.585 10:44:01 ublk -- scripts/common.sh@345 -- # : 1 00:12:40.585 10:44:01 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.585 10:44:01 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:40.585 10:44:01 ublk -- scripts/common.sh@365 -- # decimal 1 00:12:40.585 10:44:01 ublk -- scripts/common.sh@353 -- # local d=1 00:12:40.585 10:44:01 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.585 10:44:01 ublk -- scripts/common.sh@355 -- # echo 1 00:12:40.585 10:44:01 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.585 10:44:01 ublk -- scripts/common.sh@366 -- # decimal 2 00:12:40.585 10:44:01 ublk -- scripts/common.sh@353 -- # local d=2 00:12:40.585 10:44:01 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.585 10:44:01 ublk -- scripts/common.sh@355 -- # echo 2 00:12:40.585 10:44:01 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.585 10:44:01 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.585 10:44:01 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.585 10:44:01 ublk -- scripts/common.sh@368 -- # return 0 00:12:40.585 10:44:01 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.585 10:44:01 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:40.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.585 --rc genhtml_branch_coverage=1 00:12:40.585 --rc genhtml_function_coverage=1 00:12:40.585 --rc genhtml_legend=1 00:12:40.585 --rc geninfo_all_blocks=1 00:12:40.585 --rc geninfo_unexecuted_blocks=1 00:12:40.585 00:12:40.585 ' 00:12:40.585 10:44:01 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:40.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.585 --rc genhtml_branch_coverage=1 00:12:40.585 --rc genhtml_function_coverage=1 00:12:40.585 --rc genhtml_legend=1 00:12:40.585 --rc geninfo_all_blocks=1 00:12:40.585 --rc geninfo_unexecuted_blocks=1 00:12:40.585 00:12:40.585 ' 00:12:40.585 10:44:01 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:40.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.585 --rc genhtml_branch_coverage=1 00:12:40.585 --rc 
genhtml_function_coverage=1 00:12:40.585 --rc genhtml_legend=1 00:12:40.585 --rc geninfo_all_blocks=1 00:12:40.585 --rc geninfo_unexecuted_blocks=1 00:12:40.585 00:12:40.585 ' 00:12:40.585 10:44:01 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:40.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.585 --rc genhtml_branch_coverage=1 00:12:40.585 --rc genhtml_function_coverage=1 00:12:40.585 --rc genhtml_legend=1 00:12:40.585 --rc geninfo_all_blocks=1 00:12:40.585 --rc geninfo_unexecuted_blocks=1 00:12:40.585 00:12:40.585 ' 00:12:40.585 10:44:01 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:12:40.585 10:44:01 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:12:40.585 10:44:01 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:12:40.585 10:44:01 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:12:40.585 10:44:01 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:12:40.585 10:44:01 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:12:40.585 10:44:01 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:12:40.585 10:44:01 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:12:40.585 10:44:01 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:12:40.585 10:44:01 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:12:40.585 10:44:01 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:12:40.585 10:44:01 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:12:40.585 10:44:01 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:12:40.585 10:44:01 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:12:40.585 10:44:01 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:12:40.585 10:44:01 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:12:40.585 10:44:01 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:12:40.585 10:44:01 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:12:40.585 10:44:01 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:12:40.585 10:44:01 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:12:40.585 10:44:01 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:40.585 10:44:01 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.585 10:44:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:12:40.585 ************************************ 00:12:40.585 START TEST test_save_ublk_config 00:12:40.585 ************************************ 00:12:40.585 10:44:01 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:12:40.585 10:44:01 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:12:40.585 10:44:01 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=83718 00:12:40.585 10:44:01 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:12:40.585 10:44:01 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 83718 00:12:40.585 10:44:01 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:12:40.585 10:44:01 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 83718 ']' 00:12:40.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
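test_save_config exercises configuration round-tripping: the spdk_tgt just launched (pid 83718) gets a ublk target and disk created over RPC, then serializes its live state with save_config. A sketch of the same flow via scripts/rpc.py, with the caveat that flag spellings can differ between SPDK versions:

  build/bin/spdk_tgt -L ublk &
  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 32 4096    # 8192 blocks x 4096 B, matching the dump below
  scripts/rpc.py ublk_start_disk malloc0 0                # bdev_name and ublk_id; exposes /dev/ublkb0
  scripts/rpc.py save_config > ublk_config.json

A second spdk_tgt (pid 83756 further down) then reloads the captured JSON as its startup config via -c /dev/fd/63, and the test verifies that ublk_get_disks still reports /dev/ublkb0.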
00:12:40.585 10:44:01 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.585 10:44:01 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:40.585 10:44:01 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.585 10:44:01 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:40.585 10:44:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:12:40.846 [2024-10-08 10:44:01.182645] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:12:40.846 [2024-10-08 10:44:01.182808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83718 ] 00:12:40.846 [2024-10-08 10:44:01.315751] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:40.846 [2024-10-08 10:44:01.335666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.846 [2024-10-08 10:44:01.396462] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.789 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:41.789 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:12:41.789 10:44:02 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:12:41.789 10:44:02 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:12:41.789 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.789 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:12:41.789 [2024-10-08 10:44:02.047833] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:12:41.789 [2024-10-08 10:44:02.048187] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:41.789 malloc0 00:12:41.789 [2024-10-08 10:44:02.079957] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:41.789 [2024-10-08 10:44:02.080065] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:41.789 [2024-10-08 10:44:02.080077] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:41.789 [2024-10-08 10:44:02.080090] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:41.789 [2024-10-08 10:44:02.089520] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:41.789 [2024-10-08 10:44:02.089543] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:41.789 [2024-10-08 10:44:02.096820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:41.789 [2024-10-08 10:44:02.096940] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:41.789 [2024-10-08 10:44:02.113828] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:41.789 0 00:12:41.789 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.789 10:44:02 ublk.test_save_ublk_config -- ublk/ublk.sh@115 
-- # rpc_cmd save_config 00:12:41.789 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.789 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:12:42.051 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.051 10:44:02 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:12:42.051 "subsystems": [ 00:12:42.051 { 00:12:42.051 "subsystem": "fsdev", 00:12:42.051 "config": [ 00:12:42.051 { 00:12:42.051 "method": "fsdev_set_opts", 00:12:42.051 "params": { 00:12:42.051 "fsdev_io_pool_size": 65535, 00:12:42.051 "fsdev_io_cache_size": 256 00:12:42.051 } 00:12:42.051 } 00:12:42.051 ] 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "subsystem": "keyring", 00:12:42.051 "config": [] 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "subsystem": "iobuf", 00:12:42.051 "config": [ 00:12:42.051 { 00:12:42.051 "method": "iobuf_set_options", 00:12:42.051 "params": { 00:12:42.051 "small_pool_count": 8192, 00:12:42.051 "large_pool_count": 1024, 00:12:42.051 "small_bufsize": 8192, 00:12:42.051 "large_bufsize": 135168 00:12:42.051 } 00:12:42.051 } 00:12:42.051 ] 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "subsystem": "sock", 00:12:42.051 "config": [ 00:12:42.051 { 00:12:42.051 "method": "sock_set_default_impl", 00:12:42.051 "params": { 00:12:42.051 "impl_name": "posix" 00:12:42.051 } 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "method": "sock_impl_set_options", 00:12:42.051 "params": { 00:12:42.051 "impl_name": "ssl", 00:12:42.051 "recv_buf_size": 4096, 00:12:42.051 "send_buf_size": 4096, 00:12:42.051 "enable_recv_pipe": true, 00:12:42.051 "enable_quickack": false, 00:12:42.051 "enable_placement_id": 0, 00:12:42.051 "enable_zerocopy_send_server": true, 00:12:42.051 "enable_zerocopy_send_client": false, 00:12:42.051 "zerocopy_threshold": 0, 00:12:42.051 "tls_version": 0, 00:12:42.051 "enable_ktls": false 00:12:42.051 } 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "method": "sock_impl_set_options", 00:12:42.051 "params": { 00:12:42.051 "impl_name": "posix", 00:12:42.051 "recv_buf_size": 2097152, 00:12:42.051 "send_buf_size": 2097152, 00:12:42.051 "enable_recv_pipe": true, 00:12:42.051 "enable_quickack": false, 00:12:42.051 "enable_placement_id": 0, 00:12:42.051 "enable_zerocopy_send_server": true, 00:12:42.051 "enable_zerocopy_send_client": false, 00:12:42.051 "zerocopy_threshold": 0, 00:12:42.051 "tls_version": 0, 00:12:42.051 "enable_ktls": false 00:12:42.051 } 00:12:42.051 } 00:12:42.051 ] 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "subsystem": "vmd", 00:12:42.051 "config": [] 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "subsystem": "accel", 00:12:42.051 "config": [ 00:12:42.051 { 00:12:42.051 "method": "accel_set_options", 00:12:42.051 "params": { 00:12:42.051 "small_cache_size": 128, 00:12:42.051 "large_cache_size": 16, 00:12:42.051 "task_count": 2048, 00:12:42.051 "sequence_count": 2048, 00:12:42.051 "buf_count": 2048 00:12:42.051 } 00:12:42.051 } 00:12:42.051 ] 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "subsystem": "bdev", 00:12:42.051 "config": [ 00:12:42.051 { 00:12:42.051 "method": "bdev_set_options", 00:12:42.051 "params": { 00:12:42.051 "bdev_io_pool_size": 65535, 00:12:42.051 "bdev_io_cache_size": 256, 00:12:42.051 "bdev_auto_examine": true, 00:12:42.051 "iobuf_small_cache_size": 128, 00:12:42.051 "iobuf_large_cache_size": 16 00:12:42.051 } 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "method": "bdev_raid_set_options", 00:12:42.051 "params": { 00:12:42.051 
"process_window_size_kb": 1024, 00:12:42.051 "process_max_bandwidth_mb_sec": 0 00:12:42.051 } 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "method": "bdev_iscsi_set_options", 00:12:42.051 "params": { 00:12:42.051 "timeout_sec": 30 00:12:42.051 } 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "method": "bdev_nvme_set_options", 00:12:42.051 "params": { 00:12:42.051 "action_on_timeout": "none", 00:12:42.051 "timeout_us": 0, 00:12:42.051 "timeout_admin_us": 0, 00:12:42.051 "keep_alive_timeout_ms": 10000, 00:12:42.051 "arbitration_burst": 0, 00:12:42.051 "low_priority_weight": 0, 00:12:42.051 "medium_priority_weight": 0, 00:12:42.051 "high_priority_weight": 0, 00:12:42.051 "nvme_adminq_poll_period_us": 10000, 00:12:42.051 "nvme_ioq_poll_period_us": 0, 00:12:42.051 "io_queue_requests": 0, 00:12:42.051 "delay_cmd_submit": true, 00:12:42.051 "transport_retry_count": 4, 00:12:42.051 "bdev_retry_count": 3, 00:12:42.051 "transport_ack_timeout": 0, 00:12:42.051 "ctrlr_loss_timeout_sec": 0, 00:12:42.051 "reconnect_delay_sec": 0, 00:12:42.051 "fast_io_fail_timeout_sec": 0, 00:12:42.051 "disable_auto_failback": false, 00:12:42.051 "generate_uuids": false, 00:12:42.051 "transport_tos": 0, 00:12:42.051 "nvme_error_stat": false, 00:12:42.051 "rdma_srq_size": 0, 00:12:42.051 "io_path_stat": false, 00:12:42.051 "allow_accel_sequence": false, 00:12:42.051 "rdma_max_cq_size": 0, 00:12:42.051 "rdma_cm_event_timeout_ms": 0, 00:12:42.051 "dhchap_digests": [ 00:12:42.051 "sha256", 00:12:42.051 "sha384", 00:12:42.051 "sha512" 00:12:42.051 ], 00:12:42.051 "dhchap_dhgroups": [ 00:12:42.051 "null", 00:12:42.051 "ffdhe2048", 00:12:42.051 "ffdhe3072", 00:12:42.051 "ffdhe4096", 00:12:42.051 "ffdhe6144", 00:12:42.051 "ffdhe8192" 00:12:42.051 ] 00:12:42.051 } 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "method": "bdev_nvme_set_hotplug", 00:12:42.051 "params": { 00:12:42.051 "period_us": 100000, 00:12:42.051 "enable": false 00:12:42.051 } 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "method": "bdev_malloc_create", 00:12:42.051 "params": { 00:12:42.051 "name": "malloc0", 00:12:42.051 "num_blocks": 8192, 00:12:42.051 "block_size": 4096, 00:12:42.051 "physical_block_size": 4096, 00:12:42.051 "uuid": "3dfc0934-220a-43b2-8194-47d2aee508cd", 00:12:42.051 "optimal_io_boundary": 0, 00:12:42.051 "md_size": 0, 00:12:42.051 "dif_type": 0, 00:12:42.051 "dif_is_head_of_md": false, 00:12:42.051 "dif_pi_format": 0 00:12:42.051 } 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "method": "bdev_wait_for_examine" 00:12:42.051 } 00:12:42.051 ] 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "subsystem": "scsi", 00:12:42.051 "config": null 00:12:42.051 }, 00:12:42.051 { 00:12:42.051 "subsystem": "scheduler", 00:12:42.051 "config": [ 00:12:42.051 { 00:12:42.051 "method": "framework_set_scheduler", 00:12:42.051 "params": { 00:12:42.051 "name": "static" 00:12:42.051 } 00:12:42.051 } 00:12:42.051 ] 00:12:42.052 }, 00:12:42.052 { 00:12:42.052 "subsystem": "vhost_scsi", 00:12:42.052 "config": [] 00:12:42.052 }, 00:12:42.052 { 00:12:42.052 "subsystem": "vhost_blk", 00:12:42.052 "config": [] 00:12:42.052 }, 00:12:42.052 { 00:12:42.052 "subsystem": "ublk", 00:12:42.052 "config": [ 00:12:42.052 { 00:12:42.052 "method": "ublk_create_target", 00:12:42.052 "params": { 00:12:42.052 "cpumask": "1" 00:12:42.052 } 00:12:42.052 }, 00:12:42.052 { 00:12:42.052 "method": "ublk_start_disk", 00:12:42.052 "params": { 00:12:42.052 "bdev_name": "malloc0", 00:12:42.052 "ublk_id": 0, 00:12:42.052 "num_queues": 1, 00:12:42.052 "queue_depth": 128 00:12:42.052 } 00:12:42.052 } 
00:12:42.052 ] 00:12:42.052 }, 00:12:42.052 { 00:12:42.052 "subsystem": "nbd", 00:12:42.052 "config": [] 00:12:42.052 }, 00:12:42.052 { 00:12:42.052 "subsystem": "nvmf", 00:12:42.052 "config": [ 00:12:42.052 { 00:12:42.052 "method": "nvmf_set_config", 00:12:42.052 "params": { 00:12:42.052 "discovery_filter": "match_any", 00:12:42.052 "admin_cmd_passthru": { 00:12:42.052 "identify_ctrlr": false 00:12:42.052 }, 00:12:42.052 "dhchap_digests": [ 00:12:42.052 "sha256", 00:12:42.052 "sha384", 00:12:42.052 "sha512" 00:12:42.052 ], 00:12:42.052 "dhchap_dhgroups": [ 00:12:42.052 "null", 00:12:42.052 "ffdhe2048", 00:12:42.052 "ffdhe3072", 00:12:42.052 "ffdhe4096", 00:12:42.052 "ffdhe6144", 00:12:42.052 "ffdhe8192" 00:12:42.052 ] 00:12:42.052 } 00:12:42.052 }, 00:12:42.052 { 00:12:42.052 "method": "nvmf_set_max_subsystems", 00:12:42.052 "params": { 00:12:42.052 "max_subsystems": 1024 00:12:42.052 } 00:12:42.052 }, 00:12:42.052 { 00:12:42.052 "method": "nvmf_set_crdt", 00:12:42.052 "params": { 00:12:42.052 "crdt1": 0, 00:12:42.052 "crdt2": 0, 00:12:42.052 "crdt3": 0 00:12:42.052 } 00:12:42.052 } 00:12:42.052 ] 00:12:42.052 }, 00:12:42.052 { 00:12:42.052 "subsystem": "iscsi", 00:12:42.052 "config": [ 00:12:42.052 { 00:12:42.052 "method": "iscsi_set_options", 00:12:42.052 "params": { 00:12:42.052 "node_base": "iqn.2016-06.io.spdk", 00:12:42.052 "max_sessions": 128, 00:12:42.052 "max_connections_per_session": 2, 00:12:42.052 "max_queue_depth": 64, 00:12:42.052 "default_time2wait": 2, 00:12:42.052 "default_time2retain": 20, 00:12:42.052 "first_burst_length": 8192, 00:12:42.052 "immediate_data": true, 00:12:42.052 "allow_duplicated_isid": false, 00:12:42.052 "error_recovery_level": 0, 00:12:42.052 "nop_timeout": 60, 00:12:42.052 "nop_in_interval": 30, 00:12:42.052 "disable_chap": false, 00:12:42.052 "require_chap": false, 00:12:42.052 "mutual_chap": false, 00:12:42.052 "chap_group": 0, 00:12:42.052 "max_large_datain_per_connection": 64, 00:12:42.052 "max_r2t_per_connection": 4, 00:12:42.052 "pdu_pool_size": 36864, 00:12:42.052 "immediate_data_pool_size": 16384, 00:12:42.052 "data_out_pool_size": 2048 00:12:42.052 } 00:12:42.052 } 00:12:42.052 ] 00:12:42.052 } 00:12:42.052 ] 00:12:42.052 }' 00:12:42.052 10:44:02 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 83718 00:12:42.052 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 83718 ']' 00:12:42.052 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 83718 00:12:42.052 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:12:42.052 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:42.052 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83718 00:12:42.052 killing process with pid 83718 00:12:42.052 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:42.052 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:42.052 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83718' 00:12:42.052 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 83718 00:12:42.052 10:44:02 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 83718 00:12:42.314 [2024-10-08 10:44:02.716288] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:42.314 
[2024-10-08 10:44:02.744862] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:42.314 [2024-10-08 10:44:02.745002] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:42.314 [2024-10-08 10:44:02.752845] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:42.314 [2024-10-08 10:44:02.752916] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:42.314 [2024-10-08 10:44:02.752927] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:42.314 [2024-10-08 10:44:02.752950] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:12:42.314 [2024-10-08 10:44:02.753107] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:12:42.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.888 10:44:03 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=83756 00:12:42.888 10:44:03 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 83756 00:12:42.888 10:44:03 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 83756 ']' 00:12:42.888 10:44:03 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.888 10:44:03 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:42.888 10:44:03 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.888 10:44:03 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:42.888 10:44:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:12:42.888 10:44:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:12:42.888 10:44:03 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:12:42.888 "subsystems": [ 00:12:42.888 { 00:12:42.888 "subsystem": "fsdev", 00:12:42.888 "config": [ 00:12:42.888 { 00:12:42.888 "method": "fsdev_set_opts", 00:12:42.888 "params": { 00:12:42.888 "fsdev_io_pool_size": 65535, 00:12:42.888 "fsdev_io_cache_size": 256 00:12:42.888 } 00:12:42.888 } 00:12:42.888 ] 00:12:42.888 }, 00:12:42.888 { 00:12:42.888 "subsystem": "keyring", 00:12:42.888 "config": [] 00:12:42.888 }, 00:12:42.888 { 00:12:42.888 "subsystem": "iobuf", 00:12:42.888 "config": [ 00:12:42.888 { 00:12:42.888 "method": "iobuf_set_options", 00:12:42.888 "params": { 00:12:42.888 "small_pool_count": 8192, 00:12:42.888 "large_pool_count": 1024, 00:12:42.888 "small_bufsize": 8192, 00:12:42.888 "large_bufsize": 135168 00:12:42.888 } 00:12:42.888 } 00:12:42.888 ] 00:12:42.888 }, 00:12:42.888 { 00:12:42.888 "subsystem": "sock", 00:12:42.888 "config": [ 00:12:42.888 { 00:12:42.888 "method": "sock_set_default_impl", 00:12:42.888 "params": { 00:12:42.888 "impl_name": "posix" 00:12:42.888 } 00:12:42.888 }, 00:12:42.888 { 00:12:42.888 "method": "sock_impl_set_options", 00:12:42.888 "params": { 00:12:42.888 "impl_name": "ssl", 00:12:42.888 "recv_buf_size": 4096, 00:12:42.888 "send_buf_size": 4096, 00:12:42.888 "enable_recv_pipe": true, 00:12:42.888 "enable_quickack": false, 00:12:42.888 "enable_placement_id": 0, 00:12:42.888 "enable_zerocopy_send_server": true, 00:12:42.888 "enable_zerocopy_send_client": false, 00:12:42.888 "zerocopy_threshold": 0, 00:12:42.888 "tls_version": 0, 00:12:42.888 "enable_ktls": false 00:12:42.888 } 00:12:42.888 }, 00:12:42.888 { 
00:12:42.888 "method": "sock_impl_set_options", 00:12:42.888 "params": { 00:12:42.888 "impl_name": "posix", 00:12:42.888 "recv_buf_size": 2097152, 00:12:42.888 "send_buf_size": 2097152, 00:12:42.888 "enable_recv_pipe": true, 00:12:42.888 "enable_quickack": false, 00:12:42.888 "enable_placement_id": 0, 00:12:42.888 "enable_zerocopy_send_server": true, 00:12:42.888 "enable_zerocopy_send_client": false, 00:12:42.888 "zerocopy_threshold": 0, 00:12:42.888 "tls_version": 0, 00:12:42.888 "enable_ktls": false 00:12:42.888 } 00:12:42.888 } 00:12:42.888 ] 00:12:42.888 }, 00:12:42.888 { 00:12:42.888 "subsystem": "vmd", 00:12:42.888 "config": [] 00:12:42.888 }, 00:12:42.888 { 00:12:42.888 "subsystem": "accel", 00:12:42.888 "config": [ 00:12:42.888 { 00:12:42.888 "method": "accel_set_options", 00:12:42.888 "params": { 00:12:42.888 "small_cache_size": 128, 00:12:42.888 "large_cache_size": 16, 00:12:42.888 "task_count": 2048, 00:12:42.888 "sequence_count": 2048, 00:12:42.888 "buf_count": 2048 00:12:42.888 } 00:12:42.888 } 00:12:42.888 ] 00:12:42.888 }, 00:12:42.888 { 00:12:42.888 "subsystem": "bdev", 00:12:42.888 "config": [ 00:12:42.888 { 00:12:42.888 "method": "bdev_set_options", 00:12:42.888 "params": { 00:12:42.888 "bdev_io_pool_size": 65535, 00:12:42.888 "bdev_io_cache_size": 256, 00:12:42.888 "bdev_auto_examine": true, 00:12:42.888 "iobuf_small_cache_size": 128, 00:12:42.888 "iobuf_large_cache_size": 16 00:12:42.888 } 00:12:42.888 }, 00:12:42.888 { 00:12:42.888 "method": "bdev_raid_set_options", 00:12:42.888 "params": { 00:12:42.888 "process_window_size_kb": 1024, 00:12:42.888 "process_max_bandwidth_mb_sec": 0 00:12:42.888 } 00:12:42.888 }, 00:12:42.888 { 00:12:42.888 "method": "bdev_iscsi_set_options", 00:12:42.888 "params": { 00:12:42.888 "timeout_sec": 30 00:12:42.888 } 00:12:42.888 }, 00:12:42.888 { 00:12:42.888 "method": "bdev_nvme_set_options", 00:12:42.888 "params": { 00:12:42.888 "action_on_timeout": "none", 00:12:42.889 "timeout_us": 0, 00:12:42.889 "timeout_admin_us": 0, 00:12:42.889 "keep_alive_timeout_ms": 10000, 00:12:42.889 "arbitration_burst": 0, 00:12:42.889 "low_priority_weight": 0, 00:12:42.889 "medium_priority_weight": 0, 00:12:42.889 "high_priority_weight": 0, 00:12:42.889 "nvme_adminq_poll_period_us": 10000, 00:12:42.889 "nvme_ioq_poll_period_us": 0, 00:12:42.889 "io_queue_requests": 0, 00:12:42.889 "delay_cmd_submit": true, 00:12:42.889 "transport_retry_count": 4, 00:12:42.889 "bdev_retry_count": 3, 00:12:42.889 "transport_ack_timeout": 0, 00:12:42.889 "ctrlr_loss_timeout_sec": 0, 00:12:42.889 "reconnect_delay_sec": 0, 00:12:42.889 "fast_io_fail_timeout_sec": 0, 00:12:42.889 "disable_auto_failback": false, 00:12:42.889 "generate_uuids": false, 00:12:42.889 "transport_tos": 0, 00:12:42.889 "nvme_error_stat": false, 00:12:42.889 "rdma_srq_size": 0, 00:12:42.889 "io_path_stat": false, 00:12:42.889 "allow_accel_sequence": false, 00:12:42.889 "rdma_max_cq_size": 0, 00:12:42.889 "rdma_cm_event_timeout_ms": 0, 00:12:42.889 "dhchap_digests": [ 00:12:42.889 "sha256", 00:12:42.889 "sha384", 00:12:42.889 "sha512" 00:12:42.889 ], 00:12:42.889 "dhchap_dhgroups": [ 00:12:42.889 "null", 00:12:42.889 "ffdhe2048", 00:12:42.889 "ffdhe3072", 00:12:42.889 "ffdhe4096", 00:12:42.889 "ffdhe6144", 00:12:42.889 "ffdhe8192" 00:12:42.889 ] 00:12:42.889 } 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "method": "bdev_nvme_set_hotplug", 00:12:42.889 "params": { 00:12:42.889 "period_us": 100000, 00:12:42.889 "enable": false 00:12:42.889 } 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "method": 
"bdev_malloc_create", 00:12:42.889 "params": { 00:12:42.889 "name": "malloc0", 00:12:42.889 "num_blocks": 8192, 00:12:42.889 "block_size": 4096, 00:12:42.889 "physical_block_size": 4096, 00:12:42.889 "uuid": "3dfc0934-220a-43b2-8194-47d2aee508cd", 00:12:42.889 "optimal_io_boundary": 0, 00:12:42.889 "md_size": 0, 00:12:42.889 "dif_type": 0, 00:12:42.889 "dif_is_head_of_md": false, 00:12:42.889 "dif_pi_format": 0 00:12:42.889 } 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "method": "bdev_wait_for_examine" 00:12:42.889 } 00:12:42.889 ] 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "subsystem": "scsi", 00:12:42.889 "config": null 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "subsystem": "scheduler", 00:12:42.889 "config": [ 00:12:42.889 { 00:12:42.889 "method": "framework_set_scheduler", 00:12:42.889 "params": { 00:12:42.889 "name": "static" 00:12:42.889 } 00:12:42.889 } 00:12:42.889 ] 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "subsystem": "vhost_scsi", 00:12:42.889 "config": [] 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "subsystem": "vhost_blk", 00:12:42.889 "config": [] 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "subsystem": "ublk", 00:12:42.889 "config": [ 00:12:42.889 { 00:12:42.889 "method": "ublk_create_target", 00:12:42.889 "params": { 00:12:42.889 "cpumask": "1" 00:12:42.889 } 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "method": "ublk_start_disk", 00:12:42.889 "params": { 00:12:42.889 "bdev_name": "malloc0", 00:12:42.889 "ublk_id": 0, 00:12:42.889 "num_queues": 1, 00:12:42.889 "queue_depth": 128 00:12:42.889 } 00:12:42.889 } 00:12:42.889 ] 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "subsystem": "nbd", 00:12:42.889 "config": [] 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "subsystem": "nvmf", 00:12:42.889 "config": [ 00:12:42.889 { 00:12:42.889 "method": "nvmf_set_config", 00:12:42.889 "params": { 00:12:42.889 "discovery_filter": "match_any", 00:12:42.889 "admin_cmd_passthru": { 00:12:42.889 "identify_ctrlr": false 00:12:42.889 }, 00:12:42.889 "dhchap_digests": [ 00:12:42.889 "sha256", 00:12:42.889 "sha384", 00:12:42.889 "sha512" 00:12:42.889 ], 00:12:42.889 "dhchap_dhgroups": [ 00:12:42.889 "null", 00:12:42.889 "ffdhe2048", 00:12:42.889 "ffdhe3072", 00:12:42.889 "ffdhe4096", 00:12:42.889 "ffdhe6144", 00:12:42.889 "ffdhe8192" 00:12:42.889 ] 00:12:42.889 } 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "method": "nvmf_set_max_subsystems", 00:12:42.889 "params": { 00:12:42.889 "max_subsystems": 1024 00:12:42.889 } 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "method": "nvmf_set_crdt", 00:12:42.889 "params": { 00:12:42.889 "crdt1": 0, 00:12:42.889 "crdt2": 0, 00:12:42.889 "crdt3": 0 00:12:42.889 } 00:12:42.889 } 00:12:42.889 ] 00:12:42.889 }, 00:12:42.889 { 00:12:42.889 "subsystem": "iscsi", 00:12:42.889 "config": [ 00:12:42.889 { 00:12:42.889 "method": "iscsi_set_options", 00:12:42.889 "params": { 00:12:42.889 "node_base": "iqn.2016-06.io.spdk", 00:12:42.889 "max_sessions": 128, 00:12:42.889 "max_connections_per_session": 2, 00:12:42.889 "max_queue_depth": 64, 00:12:42.889 "default_time2wait": 2, 00:12:42.889 "default_time2retain": 20, 00:12:42.889 "first_burst_length": 8192, 00:12:42.889 "immediate_data": true, 00:12:42.889 "allow_duplicated_isid": false, 00:12:42.889 "error_recovery_level": 0, 00:12:42.889 "nop_timeout": 60, 00:12:42.889 "nop_in_interval": 30, 00:12:42.889 "disable_chap": false, 00:12:42.889 "require_chap": false, 00:12:42.889 "mutual_chap": false, 00:12:42.889 "chap_group": 0, 00:12:42.889 "max_large_datain_per_connection": 64, 00:12:42.889 "max_r2t_per_connection": 4, 
00:12:42.889 "pdu_pool_size": 36864, 00:12:42.889 "immediate_data_pool_size": 16384, 00:12:42.889 "data_out_pool_size": 2048 00:12:42.889 } 00:12:42.889 } 00:12:42.889 ] 00:12:42.889 } 00:12:42.889 ] 00:12:42.889 }' 00:12:42.889 [2024-10-08 10:44:03.349300] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:12:42.889 [2024-10-08 10:44:03.349726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83756 ] 00:12:43.151 [2024-10-08 10:44:03.485615] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:43.151 [2024-10-08 10:44:03.503074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.151 [2024-10-08 10:44:03.564629] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.412 [2024-10-08 10:44:03.921816] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:12:43.412 [2024-10-08 10:44:03.922129] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:43.412 [2024-10-08 10:44:03.929960] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:12:43.412 [2024-10-08 10:44:03.930048] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:12:43.412 [2024-10-08 10:44:03.930060] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:43.412 [2024-10-08 10:44:03.930070] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:43.412 [2024-10-08 10:44:03.938916] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:43.412 [2024-10-08 10:44:03.938943] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:43.412 [2024-10-08 10:44:03.945831] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:43.412 [2024-10-08 10:44:03.945977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:43.412 [2024-10-08 10:44:03.962829] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:43.674 10:44:04 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:43.674 10:44:04 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:12:43.674 10:44:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:12:43.674 10:44:04 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.674 10:44:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:12:43.674 10:44:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:12:43.674 10:44:04 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.674 10:44:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:43.674 10:44:04 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:12:43.674 10:44:04 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 83756 00:12:43.674 10:44:04 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 83756 ']' 00:12:43.674 10:44:04 ublk.test_save_ublk_config -- 
common/autotest_common.sh@954 -- # kill -0 83756 00:12:43.936 10:44:04 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:12:43.936 10:44:04 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:43.936 10:44:04 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83756 00:12:43.936 killing process with pid 83756 00:12:43.936 10:44:04 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:43.936 10:44:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:43.936 10:44:04 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83756' 00:12:43.936 10:44:04 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 83756 00:12:43.936 10:44:04 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 83756 00:12:44.197 [2024-10-08 10:44:04.573138] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:44.197 [2024-10-08 10:44:04.609847] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:44.197 [2024-10-08 10:44:04.609996] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:44.197 [2024-10-08 10:44:04.617840] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:44.197 [2024-10-08 10:44:04.617906] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:44.197 [2024-10-08 10:44:04.617923] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:44.197 [2024-10-08 10:44:04.617953] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:12:44.197 [2024-10-08 10:44:04.618110] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:12:44.769 10:44:05 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:12:44.769 ************************************ 00:12:44.769 END TEST test_save_ublk_config 00:12:44.769 ************************************ 00:12:44.769 00:12:44.769 real 0m4.038s 00:12:44.769 user 0m2.707s 00:12:44.769 sys 0m1.999s 00:12:44.769 10:44:05 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.769 10:44:05 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:12:44.769 10:44:05 ublk -- ublk/ublk.sh@139 -- # spdk_pid=83812 00:12:44.769 10:44:05 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:44.769 10:44:05 ublk -- ublk/ublk.sh@141 -- # waitforlisten 83812 00:12:44.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.769 10:44:05 ublk -- common/autotest_common.sh@831 -- # '[' -z 83812 ']' 00:12:44.769 10:44:05 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:12:44.769 10:44:05 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.769 10:44:05 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:44.769 10:44:05 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.769 10:44:05 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:44.769 10:44:05 ublk -- common/autotest_common.sh@10 -- # set +x 00:12:44.769 [2024-10-08 10:44:05.275636] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
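The test_save_ublk_config case that wraps up above is a config round-trip: an earlier target's JSON configuration (the large "subsystems" document echoed into /dev/fd/63 above) is replayed into a fresh spdk_tgt, and the test then asserts that /dev/ublkb0 comes back exactly as configured. A minimal sketch of that round-trip in bash, assuming a target is already running; variable names are illustrative, and rpc.py's save_config is used here on the assumption that it is what produced the JSON seen above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cfg=$("$rpc" save_config)                 # dump the live target's JSON config
    # Replay the captured config into a new target, mirroring the
    # 'spdk_tgt -L ublk -c /dev/fd/63' invocation recorded in this log.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c <(printf '%s' "$cfg") &
    tgt_pid=$!
    # (the real test gates this on waitforlisten before issuing RPCs)
    "$rpc" ublk_get_disks | jq -r '.[0].ublk_device'   # expect: /dev/ublkb0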
00:12:44.769 [2024-10-08 10:44:05.275786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83812 ] 00:12:45.030 [2024-10-08 10:44:05.416043] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:45.030 [2024-10-08 10:44:05.435053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:45.030 [2024-10-08 10:44:05.488084] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.030 [2024-10-08 10:44:05.488152] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.607 10:44:06 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.607 10:44:06 ublk -- common/autotest_common.sh@864 -- # return 0 00:12:45.607 10:44:06 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:12:45.607 10:44:06 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:45.607 10:44:06 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.607 10:44:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:12:45.607 ************************************ 00:12:45.607 START TEST test_create_ublk 00:12:45.607 ************************************ 00:12:45.607 10:44:06 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:12:45.607 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:12:45.607 10:44:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.607 10:44:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:45.607 [2024-10-08 10:44:06.155826] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:12:45.607 [2024-10-08 10:44:06.157654] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:45.607 10:44:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.607 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:12:45.607 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:12:45.607 10:44:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.607 10:44:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:45.890 10:44:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:12:45.890 10:44:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.890 10:44:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:45.890 [2024-10-08 10:44:06.258010] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:12:45.890 [2024-10-08 10:44:06.258495] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:12:45.890 [2024-10-08 10:44:06.258520] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:45.890 [2024-10-08 10:44:06.258537] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:45.890 [2024-10-08 10:44:06.267163] ublk.c: 349:ublk_ctrl_process_cqe: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:45.890 [2024-10-08 10:44:06.267207] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:45.890 [2024-10-08 10:44:06.273853] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:45.890 [2024-10-08 10:44:06.274571] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:45.890 [2024-10-08 10:44:06.289955] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:45.890 10:44:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:12:45.890 10:44:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.890 10:44:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:45.890 10:44:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:12:45.890 { 00:12:45.890 "ublk_device": "/dev/ublkb0", 00:12:45.890 "id": 0, 00:12:45.890 "queue_depth": 512, 00:12:45.890 "num_queues": 4, 00:12:45.890 "bdev_name": "Malloc0" 00:12:45.890 } 00:12:45.890 ]' 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:12:45.890 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:12:46.158 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:12:46.158 10:44:06 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:12:46.158 10:44:06 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:12:46.158 10:44:06 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:12:46.158 10:44:06 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:12:46.158 10:44:06 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:12:46.158 10:44:06 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:12:46.158 10:44:06 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:12:46.158 10:44:06 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:12:46.158 10:44:06 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:12:46.158 10:44:06 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:12:46.158 10:44:06 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio 
--name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:12:46.158 10:44:06 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:12:46.158 fio: verification read phase will never start because write phase uses all of runtime
00:12:46.158 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:12:46.158 fio-3.35
00:12:46.158 Starting 1 process
00:12:58.352
00:12:58.352 fio_test: (groupid=0, jobs=1): err= 0: pid=83851: Tue Oct 8 10:44:16 2024
00:12:58.352 write: IOPS=14.6k, BW=57.1MiB/s (59.9MB/s)(571MiB/10001msec); 0 zone resets
00:12:58.352 clat (usec): min=48, max=8465, avg=67.66, stdev=121.66
00:12:58.352 lat (usec): min=48, max=8483, avg=68.10, stdev=121.69
00:12:58.352 clat percentiles (usec):
00:12:58.352 | 1.00th=[ 53], 5.00th=[ 56], 10.00th=[ 57], 20.00th=[ 58],
00:12:58.352 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63],
00:12:58.352 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 70], 95.00th=[ 75],
00:12:58.352 | 99.00th=[ 91], 99.50th=[ 115], 99.90th=[ 2606], 99.95th=[ 3458],
00:12:58.352 | 99.99th=[ 4047]
00:12:58.352 bw ( KiB/s): min=20608, max=61744, per=99.93%, avg=58411.37, stdev=9292.84, samples=19
00:12:58.352 iops : min= 5152, max=15436, avg=14602.84, stdev=2323.21, samples=19
00:12:58.352 lat (usec) : 50=0.02%, 100=99.34%, 250=0.36%, 500=0.10%, 750=0.01%
00:12:58.352 lat (usec) : 1000=0.01%
00:12:58.352 lat (msec) : 2=0.04%, 4=0.11%, 10=0.01%
00:12:58.352 cpu : usr=2.51%, sys=11.70%, ctx=146141, majf=0, minf=796
00:12:58.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:58.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:58.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:58.352 issued rwts: total=0,146141,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:58.352 latency : target=0, window=0, percentile=100.00%, depth=1
00:12:58.352
00:12:58.352 Run status group 0 (all jobs):
00:12:58.352 WRITE: bw=57.1MiB/s (59.9MB/s), 57.1MiB/s-57.1MiB/s (59.9MB/s-59.9MB/s), io=571MiB (599MB), run=10001-10001msec
00:12:58.352
00:12:58.352 Disk stats (read/write):
00:12:58.352 ublkb0: ios=0/144570, merge=0/0, ticks=0/8532, in_queue=8532, util=99.07%
00:12:58.352 10:44:16 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:12:58.352 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:58.352 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:12:58.352 [2024-10-08 10:44:16.716195] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:12:58.352 [2024-10-08 10:44:16.752326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:12:58.352 [2024-10-08 10:44:16.753220] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:12:58.352 [2024-10-08 10:44:16.761825] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:12:58.352 [2024-10-08 10:44:16.762095] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:12:58.352 [2024-10-08 10:44:16.762110] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0
stopped 00:12:58.352 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.352 10:44:16 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:12:58.352 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:12:58.352 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:12:58.352 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:58.352 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.352 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.353 [2024-10-08 10:44:16.769894] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:12:58.353 request: 00:12:58.353 { 00:12:58.353 "ublk_id": 0, 00:12:58.353 "method": "ublk_stop_disk", 00:12:58.353 "req_id": 1 00:12:58.353 } 00:12:58.353 Got JSON-RPC error response 00:12:58.353 response: 00:12:58.353 { 00:12:58.353 "code": -19, 00:12:58.353 "message": "No such device" 00:12:58.353 } 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.353 10:44:16 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.353 [2024-10-08 10:44:16.785883] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:12:58.353 [2024-10-08 10:44:16.787237] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:12:58.353 [2024-10-08 10:44:16.787266] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.353 10:44:16 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.353 10:44:16 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:12:58.353 10:44:16 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.353 10:44:16 
ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:58.353 10:44:16 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:12:58.353 10:44:16 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:58.353 10:44:16 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.353 10:44:16 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:58.353 10:44:16 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:12:58.353 ************************************ 00:12:58.353 END TEST test_create_ublk 00:12:58.353 ************************************ 00:12:58.353 10:44:16 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:58.353 00:12:58.353 real 0m10.796s 00:12:58.353 user 0m0.549s 00:12:58.353 sys 0m1.252s 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:58.353 10:44:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.353 10:44:16 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:12:58.353 10:44:16 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:58.353 10:44:16 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:58.353 10:44:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.353 ************************************ 00:12:58.353 START TEST test_create_multi_ublk 00:12:58.353 ************************************ 00:12:58.353 10:44:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:12:58.353 10:44:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:12:58.353 10:44:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.353 10:44:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.353 [2024-10-08 10:44:16.989810] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:12:58.353 [2024-10-08 10:44:16.990707] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:12:58.353 10:44:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.353 10:44:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:12:58.353 10:44:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:12:58.353 10:44:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.353 10:44:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:12:58.353 10:44:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.353 10:44:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.353 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.353 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:12:58.353 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:12:58.353 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.353 10:44:17 
ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.353 [2024-10-08 10:44:17.061930] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:12:58.353 [2024-10-08 10:44:17.062226] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:12:58.353 [2024-10-08 10:44:17.062238] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:12:58.353 [2024-10-08 10:44:17.062251] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:12:58.353 [2024-10-08 10:44:17.085816] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:58.353 [2024-10-08 10:44:17.085837] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:58.353 [2024-10-08 10:44:17.097819] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:58.353 [2024-10-08 10:44:17.098301] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:12:58.353 [2024-10-08 10:44:17.137821] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:12:58.353 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.353 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:12:58.353 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.353 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:12:58.353 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.353 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.353 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.353 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:12:58.353 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.354 [2024-10-08 10:44:17.221921] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:12:58.354 [2024-10-08 10:44:17.222212] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:12:58.354 [2024-10-08 10:44:17.222225] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:12:58.354 [2024-10-08 10:44:17.222230] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:12:58.354 [2024-10-08 10:44:17.233829] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:58.354 [2024-10-08 10:44:17.233845] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:58.354 [2024-10-08 10:44:17.245828] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:58.354 [2024-10-08 10:44:17.246298] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:12:58.354 [2024-10-08 10:44:17.281828] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.354 10:44:17 
ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.354 [2024-10-08 10:44:17.365956] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:12:58.354 [2024-10-08 10:44:17.366245] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:12:58.354 [2024-10-08 10:44:17.366256] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:12:58.354 [2024-10-08 10:44:17.366262] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:12:58.354 [2024-10-08 10:44:17.377822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:58.354 [2024-10-08 10:44:17.377841] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:58.354 [2024-10-08 10:44:17.389817] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:58.354 [2024-10-08 10:44:17.390297] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:12:58.354 [2024-10-08 10:44:17.425823] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.354 [2024-10-08 10:44:17.509922] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:12:58.354 [2024-10-08 10:44:17.510213] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:12:58.354 [2024-10-08 10:44:17.510226] ublk.c: 
971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:12:58.354 [2024-10-08 10:44:17.510231] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:12:58.354 [2024-10-08 10:44:17.521828] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:12:58.354 [2024-10-08 10:44:17.521843] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:12:58.354 [2024-10-08 10:44:17.533831] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:12:58.354 [2024-10-08 10:44:17.534303] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:12:58.354 [2024-10-08 10:44:17.569834] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:12:58.354 { 00:12:58.354 "ublk_device": "/dev/ublkb0", 00:12:58.354 "id": 0, 00:12:58.354 "queue_depth": 512, 00:12:58.354 "num_queues": 4, 00:12:58.354 "bdev_name": "Malloc0" 00:12:58.354 }, 00:12:58.354 { 00:12:58.354 "ublk_device": "/dev/ublkb1", 00:12:58.354 "id": 1, 00:12:58.354 "queue_depth": 512, 00:12:58.354 "num_queues": 4, 00:12:58.354 "bdev_name": "Malloc1" 00:12:58.354 }, 00:12:58.354 { 00:12:58.354 "ublk_device": "/dev/ublkb2", 00:12:58.354 "id": 2, 00:12:58.354 "queue_depth": 512, 00:12:58.354 "num_queues": 4, 00:12:58.354 "bdev_name": "Malloc2" 00:12:58.354 }, 00:12:58.354 { 00:12:58.354 "ublk_device": "/dev/ublkb3", 00:12:58.354 "id": 3, 00:12:58.354 "queue_depth": 512, 00:12:58.354 "num_queues": 4, 00:12:58.354 "bdev_name": "Malloc3" 00:12:58.354 } 00:12:58.354 ]' 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:12:58.354 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.355 10:44:17 
ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:12:58.355 10:44:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.355 [2024-10-08 10:44:18.270890] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:12:58.355 [2024-10-08 10:44:18.304344] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:58.355 [2024-10-08 10:44:18.305489] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:12:58.355 [2024-10-08 10:44:18.310830] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:58.355 [2024-10-08 10:44:18.311071] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:12:58.355 [2024-10-08 10:44:18.311085] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.355 [2024-10-08 10:44:18.324875] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:12:58.355 [2024-10-08 10:44:18.366217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:58.355 [2024-10-08 10:44:18.367380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:12:58.355 [2024-10-08 10:44:18.373822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:58.355 [2024-10-08 10:44:18.374059] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:12:58.355 [2024-10-08 10:44:18.374074] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.355 [2024-10-08 10:44:18.389884] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:12:58.355 [2024-10-08 10:44:18.437857] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:58.355 [2024-10-08 10:44:18.438584] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:12:58.355 [2024-10-08 10:44:18.445843] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:58.355 [2024-10-08 10:44:18.446066] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:12:58.355 [2024-10-08 10:44:18.446079] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # 
rpc_cmd ublk_stop_disk 3 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.355 [2024-10-08 10:44:18.461879] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:12:58.355 [2024-10-08 10:44:18.505855] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:12:58.355 [2024-10-08 10:44:18.506521] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:12:58.355 [2024-10-08 10:44:18.509956] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:12:58.355 [2024-10-08 10:44:18.510175] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:12:58.355 [2024-10-08 10:44:18.510183] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.355 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:12:58.355 [2024-10-08 10:44:18.712872] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:12:58.355 [2024-10-08 10:44:18.714189] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:12:58.356 [2024-10-08 10:44:18.714216] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.356 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.614 10:44:18 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.614 10:44:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:12:58.614 10:44:18 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:58.614 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.614 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.614 10:44:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.614 10:44:18 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:12:58.614 10:44:18 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:12:58.614 10:44:19 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:12:58.614 10:44:19 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:12:58.614 10:44:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.614 10:44:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.614 10:44:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.614 10:44:19 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:12:58.614 10:44:19 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:12:58.614 ************************************ 00:12:58.614 END TEST test_create_multi_ublk 00:12:58.614 ************************************ 00:12:58.614 10:44:19 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:12:58.614 00:12:58.614 real 0m2.073s 00:12:58.614 user 0m0.852s 00:12:58.614 sys 0m0.126s 00:12:58.614 10:44:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:58.614 10:44:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:12:58.614 10:44:19 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:12:58.615 10:44:19 ublk -- ublk/ublk.sh@147 -- # cleanup 00:12:58.615 10:44:19 ublk -- ublk/ublk.sh@130 -- # killprocess 83812 00:12:58.615 10:44:19 ublk -- common/autotest_common.sh@950 -- # '[' -z 83812 ']' 00:12:58.615 10:44:19 ublk -- common/autotest_common.sh@954 -- # kill -0 83812 00:12:58.615 10:44:19 ublk -- common/autotest_common.sh@955 -- # uname 00:12:58.615 10:44:19 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:58.615 10:44:19 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83812 00:12:58.615 killing process with pid 83812 00:12:58.615 10:44:19 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:58.615 10:44:19 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:58.615 10:44:19 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83812' 00:12:58.615 10:44:19 ublk -- common/autotest_common.sh@969 -- # kill 83812 00:12:58.615 10:44:19 ublk -- common/autotest_common.sh@974 -- # wait 83812 00:12:58.873 [2024-10-08 10:44:19.265948] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:12:58.873 [2024-10-08 10:44:19.266011] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:12:59.133 00:12:59.133 real 0m18.634s 00:12:59.133 user 0m28.626s 00:12:59.133 sys 0m7.597s 00:12:59.133 10:44:19 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.133 ************************************ 00:12:59.133 10:44:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:12:59.133 END TEST ublk 00:12:59.133 
************************************ 00:12:59.133 10:44:19 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:12:59.133 10:44:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:59.133 10:44:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.133 10:44:19 -- common/autotest_common.sh@10 -- # set +x 00:12:59.133 ************************************ 00:12:59.133 START TEST ublk_recovery 00:12:59.133 ************************************ 00:12:59.133 10:44:19 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:12:59.133 * Looking for test storage... 00:12:59.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:12:59.133 10:44:19 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:59.133 10:44:19 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:12:59.133 10:44:19 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:59.394 10:44:19 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.394 10:44:19 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:12:59.394 10:44:19 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.394 10:44:19 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:59.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.394 --rc genhtml_branch_coverage=1 00:12:59.394 --rc genhtml_function_coverage=1 00:12:59.394 --rc genhtml_legend=1 00:12:59.394 --rc geninfo_all_blocks=1 00:12:59.394 --rc geninfo_unexecuted_blocks=1 00:12:59.394 00:12:59.394 ' 00:12:59.394 10:44:19 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:59.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.394 --rc genhtml_branch_coverage=1 00:12:59.394 --rc genhtml_function_coverage=1 00:12:59.394 --rc genhtml_legend=1 00:12:59.394 --rc geninfo_all_blocks=1 00:12:59.394 --rc geninfo_unexecuted_blocks=1 00:12:59.394 00:12:59.394 ' 00:12:59.394 10:44:19 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:59.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.394 --rc genhtml_branch_coverage=1 00:12:59.394 --rc genhtml_function_coverage=1 00:12:59.394 --rc genhtml_legend=1 00:12:59.394 --rc geninfo_all_blocks=1 00:12:59.394 --rc geninfo_unexecuted_blocks=1 00:12:59.394 00:12:59.394 ' 00:12:59.394 10:44:19 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:59.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.394 --rc genhtml_branch_coverage=1 00:12:59.394 --rc genhtml_function_coverage=1 00:12:59.394 --rc genhtml_legend=1 00:12:59.394 --rc geninfo_all_blocks=1 00:12:59.394 --rc geninfo_unexecuted_blocks=1 00:12:59.394 00:12:59.394 ' 00:12:59.394 10:44:19 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:12:59.394 10:44:19 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:12:59.394 10:44:19 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:12:59.394 10:44:19 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:12:59.394 10:44:19 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:12:59.394 10:44:19 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:12:59.394 10:44:19 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:12:59.394 10:44:19 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:12:59.394 10:44:19 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:12:59.394 10:44:19 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:12:59.394 10:44:19 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=84175 00:12:59.394 10:44:19 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:59.394 10:44:19 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:12:59.394 10:44:19 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 84175 00:12:59.394 10:44:19 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 84175 ']' 00:12:59.394 10:44:19 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.394 10:44:19 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.394 10:44:19 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.394 10:44:19 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.394 10:44:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:12:59.394 [2024-10-08 10:44:19.862521] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:12:59.394 [2024-10-08 10:44:19.862869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84175 ] 00:12:59.655 [2024-10-08 10:44:20.000427] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
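(Annotator's note: the trace above loads ublk_drv, launches spdk_tgt pinned to cores 0-1 with -m 0x3, and blocks in waitforlisten until the RPC socket answers. A minimal sketch of that wait loop, assuming the helper simply polls the socket with the rpc_get_methods RPC; the real autotest_common.sh version does additional pid and retry bookkeeping:)

waitforlisten_sketch() {   # hypothetical simplification of autotest_common.sh's waitforlisten
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 100; i > 0; i--)); do                      # max_retries=100, as in the trace above
        kill -0 "$pid" 2> /dev/null || return 1          # target died before it started listening
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                                     # socket is up and answering RPCs
        fi
        sleep 0.5
    done
    return 1
}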
00:12:59.655 [2024-10-08 10:44:20.019332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:59.655 [2024-10-08 10:44:20.073002] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.655 [2024-10-08 10:44:20.073028] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.225 10:44:20 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.225 10:44:20 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:13:00.225 10:44:20 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:00.225 10:44:20 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.225 10:44:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:00.225 [2024-10-08 10:44:20.708824] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:00.225 [2024-10-08 10:44:20.710583] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:00.225 10:44:20 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.225 10:44:20 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:00.225 10:44:20 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.225 10:44:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:00.225 malloc0 00:13:00.225 10:44:20 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.225 10:44:20 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:00.225 10:44:20 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.225 10:44:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:00.225 [2024-10-08 10:44:20.756971] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:00.225 [2024-10-08 10:44:20.757078] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:00.225 [2024-10-08 10:44:20.757089] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:00.225 [2024-10-08 10:44:20.757095] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:00.225 [2024-10-08 10:44:20.765023] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:00.225 [2024-10-08 10:44:20.765045] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:00.225 [2024-10-08 10:44:20.772841] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:00.225 [2024-10-08 10:44:20.772970] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:00.225 [2024-10-08 10:44:20.789841] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:00.225 1 00:13:00.225 10:44:20 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.225 10:44:20 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:01.600 10:44:21 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=84208 00:13:01.600 10:44:21 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:01.600 10:44:21 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:01.600 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:01.600 
fio-3.35 00:13:01.600 Starting 1 process 00:13:06.867 10:44:26 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 84175 00:13:06.867 10:44:26 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:13:12.154 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 84175 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:13:12.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.154 10:44:31 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=84319 00:13:12.154 10:44:31 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:12.154 10:44:31 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 84319 00:13:12.154 10:44:31 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 84319 ']' 00:13:12.154 10:44:31 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.154 10:44:31 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:12.154 10:44:31 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:12.154 10:44:31 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.154 10:44:31 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:12.154 10:44:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.154 [2024-10-08 10:44:31.901194] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:13:12.154 [2024-10-08 10:44:31.901636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84319 ] 00:13:12.154 [2024-10-08 10:44:32.036401] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
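(Annotator's note: what happens between the kill traced above and the recovery RPCs that follow is the core of this test: crash the target while fio has I/O in flight against /dev/ublkb1, bring up a fresh target, and reattach the live device via user recovery instead of re-creating it. Condensed to the commands actually traced from ublk_recovery.sh, with this run's pids:)

kill -9 84175                                           # crash spdk_tgt mid-run (script line 36)
"$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk & spdk_pid=$!   # fresh target, becomes pid 84319 (line 41)
waitforlisten "$spdk_pid"
rpc.py ublk_create_target                               # re-create the ublk target (line 47)
rpc.py bdev_malloc_create -b malloc0 64 4096            # re-create the backing bdev (line 48)
rpc.py ublk_recover_disk malloc0 1                      # drives UBLK_CMD_START/END_USER_RECOVERY (line 49)
wait $fio_proc                                          # fio's 60 s run must finish error-free (line 52)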
00:13:12.154 [2024-10-08 10:44:32.054167] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:12.154 [2024-10-08 10:44:32.110713] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.154 [2024-10-08 10:44:32.110779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.414 10:44:32 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.414 10:44:32 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:13:12.414 10:44:32 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:13:12.415 10:44:32 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.415 10:44:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.415 [2024-10-08 10:44:32.736830] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:12.415 [2024-10-08 10:44:32.737985] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:12.415 10:44:32 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.415 10:44:32 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:12.415 10:44:32 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.415 10:44:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.415 malloc0 00:13:12.415 10:44:32 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.415 10:44:32 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:13:12.415 10:44:32 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.415 10:44:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:12.415 [2024-10-08 10:44:32.771154] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:13:12.415 [2024-10-08 10:44:32.771189] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:12.415 [2024-10-08 10:44:32.771199] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:12.415 [2024-10-08 10:44:32.776854] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:12.415 [2024-10-08 10:44:32.776877] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:12.415 1 00:13:12.415 10:44:32 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.415 10:44:32 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 84208 00:13:13.354 [2024-10-08 10:44:33.776903] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:13.354 [2024-10-08 10:44:33.786820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:13.354 [2024-10-08 10:44:33.786837] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:14.287 [2024-10-08 10:44:34.786863] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:14.287 [2024-10-08 10:44:34.794818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:14.287 [2024-10-08 10:44:34.794834] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:15.235 [2024-10-08 10:44:35.794856] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:13:15.235 [2024-10-08 10:44:35.796831] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:13:15.235 [2024-10-08 
10:44:35.796892] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:13:15.235 [2024-10-08 10:44:35.796916] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:13:15.235 [2024-10-08 10:44:35.797029] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:13:37.178 [2024-10-08 10:44:57.320834] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:13:37.178 [2024-10-08 10:44:57.327407] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:13:37.178 [2024-10-08 10:44:57.335029] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:13:37.178 [2024-10-08 10:44:57.335043] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:14:03.715 00:14:03.715 fio_test: (groupid=0, jobs=1): err= 0: pid=84211: Tue Oct 8 10:45:22 2024 00:14:03.715 read: IOPS=14.9k, BW=58.1MiB/s (60.9MB/s)(3487MiB/60002msec) 00:14:03.715 slat (nsec): min=1043, max=125582, avg=4836.24, stdev=1250.85 00:14:03.715 clat (usec): min=763, max=30539k, avg=3884.34, stdev=235297.97 00:14:03.715 lat (usec): min=767, max=30539k, avg=3889.18, stdev=235297.96 00:14:03.715 clat percentiles (usec): 00:14:03.715 | 1.00th=[ 1745], 5.00th=[ 1860], 10.00th=[ 1876], 20.00th=[ 1909], 00:14:03.715 | 30.00th=[ 1926], 40.00th=[ 1942], 50.00th=[ 1942], 60.00th=[ 1958], 00:14:03.715 | 70.00th=[ 1975], 80.00th=[ 1991], 90.00th=[ 2040], 95.00th=[ 2933], 00:14:03.715 | 99.00th=[ 5080], 99.50th=[ 5538], 99.90th=[ 7308], 99.95th=[12125], 00:14:03.715 | 99.99th=[13042] 00:14:03.715 bw ( KiB/s): min=21792, max=125744, per=100.00%, avg=119062.34, stdev=17255.47, samples=59 00:14:03.715 iops : min= 5448, max=31436, avg=29765.58, stdev=4313.87, samples=59 00:14:03.715 write: IOPS=14.9k, BW=58.0MiB/s (60.8MB/s)(3481MiB/60002msec); 0 zone resets 00:14:03.715 slat (nsec): min=1073, max=143245, avg=4858.20, stdev=1184.45 00:14:03.715 clat (usec): min=542, max=30540k, avg=4716.58, stdev=280109.67 00:14:03.715 lat (usec): min=550, max=30540k, avg=4721.44, stdev=280109.67 00:14:03.715 clat percentiles (usec): 00:14:03.715 | 1.00th=[ 1795], 5.00th=[ 1942], 10.00th=[ 1975], 20.00th=[ 1991], 00:14:03.715 | 30.00th=[ 2008], 40.00th=[ 2024], 50.00th=[ 2040], 60.00th=[ 2057], 00:14:03.715 | 70.00th=[ 2073], 80.00th=[ 2089], 90.00th=[ 2114], 95.00th=[ 2835], 00:14:03.715 | 99.00th=[ 5080], 99.50th=[ 5604], 99.90th=[ 7373], 99.95th=[12256], 00:14:03.715 | 99.99th=[16450] 00:14:03.715 bw ( KiB/s): min=21432, max=125864, per=100.00%, avg=118870.27, stdev=17359.60, samples=59 00:14:03.715 iops : min= 5358, max=31466, avg=29717.56, stdev=4339.90, samples=59 00:14:03.715 lat (usec) : 750=0.01%, 1000=0.01% 00:14:03.715 lat (msec) : 2=53.03%, 4=44.30%, 10=2.61%, 20=0.05%, >=2000=0.01% 00:14:03.715 cpu : usr=3.27%, sys=14.87%, ctx=58071, majf=0, minf=13 00:14:03.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:14:03.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:03.715 issued rwts: total=892553,891255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:03.715 00:14:03.715 Run status group 0 (all jobs): 00:14:03.715 READ: bw=58.1MiB/s (60.9MB/s), 58.1MiB/s-58.1MiB/s (60.9MB/s-60.9MB/s), io=3487MiB (3656MB), 
run=60002-60002msec 00:14:03.715 WRITE: bw=58.0MiB/s (60.8MB/s), 58.0MiB/s-58.0MiB/s (60.8MB/s-60.8MB/s), io=3481MiB (3651MB), run=60002-60002msec 00:14:03.715 00:14:03.715 Disk stats (read/write): 00:14:03.715 ublkb1: ios=889236/887979, merge=0/0, ticks=3414364/4078826, in_queue=7493190, util=99.88% 00:14:03.715 10:45:22 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.715 [2024-10-08 10:45:22.052931] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:03.715 [2024-10-08 10:45:22.092833] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:03.715 [2024-10-08 10:45:22.093063] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:03.715 [2024-10-08 10:45:22.100820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:03.715 [2024-10-08 10:45:22.100981] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:03.715 [2024-10-08 10:45:22.101043] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.715 10:45:22 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.715 [2024-10-08 10:45:22.116886] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:03.715 [2024-10-08 10:45:22.118223] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:03.715 [2024-10-08 10:45:22.118262] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.715 10:45:22 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:14:03.715 10:45:22 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:14:03.715 10:45:22 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 84319 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 84319 ']' 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 84319 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84319 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:03.715 killing process with pid 84319 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84319' 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@969 -- # kill 84319 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@974 -- # wait 84319 00:14:03.715 [2024-10-08 10:45:22.323135] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:03.715 [2024-10-08 10:45:22.323192] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:03.715 ************************************ 00:14:03.715 END TEST ublk_recovery 00:14:03.715 
************************************ 00:14:03.715 00:14:03.715 real 1m3.025s 00:14:03.715 user 1m44.762s 00:14:03.715 sys 0m21.548s 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:03.715 10:45:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:03.715 10:45:22 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:14:03.715 10:45:22 -- spdk/autotest.sh@256 -- # timing_exit lib 00:14:03.715 10:45:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:03.715 10:45:22 -- common/autotest_common.sh@10 -- # set +x 00:14:03.715 10:45:22 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:14:03.715 10:45:22 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:14:03.715 10:45:22 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:14:03.715 10:45:22 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:14:03.715 10:45:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:03.715 10:45:22 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:14:03.715 10:45:22 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:03.715 10:45:22 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:14:03.715 10:45:22 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:14:03.715 10:45:22 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:14:03.715 10:45:22 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:03.715 10:45:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:03.715 10:45:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:03.715 10:45:22 -- common/autotest_common.sh@10 -- # set +x 00:14:03.715 ************************************ 00:14:03.715 START TEST ftl 00:14:03.715 ************************************ 00:14:03.715 10:45:22 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:03.715 * Looking for test storage... 00:14:03.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:03.716 10:45:22 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:03.716 10:45:22 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:14:03.716 10:45:22 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:03.716 10:45:22 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:03.716 10:45:22 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:03.716 10:45:22 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:03.716 10:45:22 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:03.716 10:45:22 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:14:03.716 10:45:22 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:14:03.716 10:45:22 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:14:03.716 10:45:22 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:14:03.716 10:45:22 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:14:03.716 10:45:22 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:14:03.716 10:45:22 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:14:03.716 10:45:22 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:03.716 10:45:22 ftl -- scripts/common.sh@344 -- # case "$op" in 00:14:03.716 10:45:22 ftl -- scripts/common.sh@345 -- # : 1 00:14:03.716 10:45:22 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:03.716 10:45:22 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:03.716 10:45:22 ftl -- scripts/common.sh@365 -- # decimal 1 00:14:03.716 10:45:22 ftl -- scripts/common.sh@353 -- # local d=1 00:14:03.716 10:45:22 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:03.716 10:45:22 ftl -- scripts/common.sh@355 -- # echo 1 00:14:03.716 10:45:22 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:14:03.716 10:45:22 ftl -- scripts/common.sh@366 -- # decimal 2 00:14:03.716 10:45:22 ftl -- scripts/common.sh@353 -- # local d=2 00:14:03.716 10:45:22 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:03.716 10:45:22 ftl -- scripts/common.sh@355 -- # echo 2 00:14:03.716 10:45:22 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:14:03.716 10:45:22 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:03.716 10:45:22 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:03.716 10:45:22 ftl -- scripts/common.sh@368 -- # return 0 00:14:03.716 10:45:22 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:03.716 10:45:22 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:03.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.716 --rc genhtml_branch_coverage=1 00:14:03.716 --rc genhtml_function_coverage=1 00:14:03.716 --rc genhtml_legend=1 00:14:03.716 --rc geninfo_all_blocks=1 00:14:03.716 --rc geninfo_unexecuted_blocks=1 00:14:03.716 00:14:03.716 ' 00:14:03.716 10:45:22 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:03.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.716 --rc genhtml_branch_coverage=1 00:14:03.716 --rc genhtml_function_coverage=1 00:14:03.716 --rc genhtml_legend=1 00:14:03.716 --rc geninfo_all_blocks=1 00:14:03.716 --rc geninfo_unexecuted_blocks=1 00:14:03.716 00:14:03.716 ' 00:14:03.716 10:45:22 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:03.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.716 --rc genhtml_branch_coverage=1 00:14:03.716 --rc genhtml_function_coverage=1 00:14:03.716 --rc genhtml_legend=1 00:14:03.716 --rc geninfo_all_blocks=1 00:14:03.716 --rc geninfo_unexecuted_blocks=1 00:14:03.716 00:14:03.716 ' 00:14:03.716 10:45:22 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:03.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.716 --rc genhtml_branch_coverage=1 00:14:03.716 --rc genhtml_function_coverage=1 00:14:03.716 --rc genhtml_legend=1 00:14:03.716 --rc geninfo_all_blocks=1 00:14:03.716 --rc geninfo_unexecuted_blocks=1 00:14:03.716 00:14:03.716 ' 00:14:03.716 10:45:22 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:03.716 10:45:22 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:14:03.716 10:45:22 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:03.716 10:45:22 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:03.716 10:45:22 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:14:03.716 10:45:22 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:03.716 10:45:22 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:03.716 10:45:22 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:03.716 10:45:22 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:03.716 10:45:22 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:03.716 10:45:22 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:03.716 10:45:22 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:03.716 10:45:22 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:03.716 10:45:22 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:03.716 10:45:22 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:03.716 10:45:22 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:03.716 10:45:22 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:03.716 10:45:22 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:03.716 10:45:22 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:03.716 10:45:22 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:03.716 10:45:22 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:03.716 10:45:22 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:03.716 10:45:22 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:03.716 10:45:22 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:03.716 10:45:22 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:03.716 10:45:22 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:03.716 10:45:22 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:03.716 10:45:22 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:03.716 10:45:22 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:03.716 10:45:22 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:03.716 10:45:22 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:14:03.716 10:45:22 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:14:03.716 10:45:22 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:14:03.716 10:45:22 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:14:03.716 10:45:22 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:03.716 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:03.716 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:03.716 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:03.716 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:03.716 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:03.716 10:45:23 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=85112 00:14:03.716 10:45:23 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:03.716 10:45:23 ftl -- ftl/ftl.sh@38 -- # waitforlisten 85112 00:14:03.716 10:45:23 ftl -- common/autotest_common.sh@831 -- # '[' -z 85112 ']' 00:14:03.716 10:45:23 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.716 10:45:23 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:03.716 10:45:23 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.716 10:45:23 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:03.716 10:45:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:03.716 [2024-10-08 10:45:23.429409] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:14:03.716 [2024-10-08 10:45:23.429526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85112 ] 00:14:03.716 [2024-10-08 10:45:23.559668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:03.716 [2024-10-08 10:45:23.576073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.716 [2024-10-08 10:45:23.605698] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.716 10:45:24 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.716 10:45:24 ftl -- common/autotest_common.sh@864 -- # return 0 00:14:03.716 10:45:24 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:14:03.975 10:45:24 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:04.235 10:45:24 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:14:04.235 10:45:24 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:04.809 10:45:25 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:14:04.809 10:45:25 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:04.809 10:45:25 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:04.809 10:45:25 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:14:04.809 10:45:25 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:14:04.809 10:45:25 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:14:04.809 10:45:25 ftl -- ftl/ftl.sh@50 -- # break 00:14:04.809 10:45:25 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:14:04.809 10:45:25 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:14:04.809 10:45:25 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:14:04.809 10:45:25 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:14:05.071 10:45:25 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:14:05.071 10:45:25 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:14:05.071 10:45:25 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:14:05.071 10:45:25 ftl -- ftl/ftl.sh@63 -- # break 00:14:05.071 10:45:25 ftl -- ftl/ftl.sh@66 -- # killprocess 85112 00:14:05.071 10:45:25 ftl -- common/autotest_common.sh@950 -- # '[' -z 85112 ']' 00:14:05.071 10:45:25 ftl -- common/autotest_common.sh@954 -- # kill -0 85112 
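(Annotator's note: the two jq filters traced above do the device selection for the whole FTL suite: the write-buffer cache must expose 64-byte metadata (md_size==64), and the base device is whatever non-zoned namespace of at least 1310720 blocks remains once the cache's PCI address is excluded. As standalone commands, filters verbatim from this run:)

scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
# -> 0000:00:10.0, used as nv_cache
scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
# -> 0000:00:11.0, used as the base device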
00:14:05.071 10:45:25 ftl -- common/autotest_common.sh@955 -- # uname 00:14:05.071 10:45:25 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.071 10:45:25 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85112 00:14:05.071 10:45:25 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:05.071 killing process with pid 85112 00:14:05.071 10:45:25 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:05.071 10:45:25 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85112' 00:14:05.071 10:45:25 ftl -- common/autotest_common.sh@969 -- # kill 85112 00:14:05.071 10:45:25 ftl -- common/autotest_common.sh@974 -- # wait 85112 00:14:05.332 10:45:25 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:14:05.332 10:45:25 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:05.332 10:45:25 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:05.332 10:45:25 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.332 10:45:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:14:05.332 ************************************ 00:14:05.332 START TEST ftl_fio_basic 00:14:05.332 ************************************ 00:14:05.332 10:45:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:14:05.594 * Looking for test storage... 00:14:05.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.594 10:45:25 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:05.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.594 --rc genhtml_branch_coverage=1 00:14:05.594 --rc genhtml_function_coverage=1 00:14:05.594 --rc genhtml_legend=1 00:14:05.594 --rc geninfo_all_blocks=1 00:14:05.594 --rc geninfo_unexecuted_blocks=1 00:14:05.594 00:14:05.594 ' 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:05.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.594 --rc genhtml_branch_coverage=1 00:14:05.594 --rc genhtml_function_coverage=1 00:14:05.594 --rc genhtml_legend=1 00:14:05.594 --rc geninfo_all_blocks=1 00:14:05.594 --rc geninfo_unexecuted_blocks=1 00:14:05.594 00:14:05.594 ' 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:05.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.594 --rc genhtml_branch_coverage=1 00:14:05.594 --rc genhtml_function_coverage=1 00:14:05.594 --rc genhtml_legend=1 00:14:05.594 --rc geninfo_all_blocks=1 00:14:05.594 --rc geninfo_unexecuted_blocks=1 00:14:05.594 00:14:05.594 ' 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:05.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.594 --rc genhtml_branch_coverage=1 00:14:05.594 --rc genhtml_function_coverage=1 00:14:05.594 --rc genhtml_legend=1 00:14:05.594 --rc geninfo_all_blocks=1 00:14:05.594 --rc geninfo_unexecuted_blocks=1 00:14:05.594 00:14:05.594 ' 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
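(Annotator's note: the lt 1.15 2 check traced above, which gates the LCOV_OPTS exports and appears before each test binary in this log, lands in scripts/common.sh's cmp_versions: split both version strings on ., - and :, then compare the fields numerically left to right. A condensed sketch of that logic; the real helper also strips non-numeric suffixes via its decimal function:)

cmp_versions_sketch() {                      # usage: cmp_versions_sketch 1.15 '<' 2
    local -a ver1 ver2; local op=$2 v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' || $op == '>=' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == *'='* ]]                       # all fields equal: only ==, <= and >= hold
}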
00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:05.594 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=85228 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 85228 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 85228 ']' 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:05.595 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:14:05.595 [2024-10-08 10:45:26.098052] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:14:05.595 [2024-10-08 10:45:26.098179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85228 ] 00:14:05.855 [2024-10-08 10:45:26.228439] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
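(Annotator's note: with the target up on three cores (-m 7), fio.sh builds its bdev stack. The create_base_bdev and get_bdev_size steps traced below reduce to an attach plus a size computation from block_size and num_blocks; roughly, with this run's values:)

rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # exposes nvme0n1
bs=$(rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')         # 4096
nb=$(rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')         # 1310720
echo $((bs * nb / 1024 / 1024))   # 5120 MiB; ftl/common.sh then compares this with the requested 103424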
00:14:05.855 [2024-10-08 10:45:26.245677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:05.855 [2024-10-08 10:45:26.298953] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.855 [2024-10-08 10:45:26.299155] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.855 [2024-10-08 10:45:26.299311] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.424 10:45:26 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.424 10:45:26 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:14:06.424 10:45:26 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:14:06.424 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:14:06.424 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:14:06.424 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:14:06.424 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:14:06.424 10:45:26 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:14:06.684 10:45:27 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:14:06.684 10:45:27 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:14:06.684 10:45:27 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:14:06.684 10:45:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:14:06.684 10:45:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:06.684 10:45:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:06.684 10:45:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:06.684 10:45:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:14:06.944 10:45:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:06.944 { 00:14:06.944 "name": "nvme0n1", 00:14:06.944 "aliases": [ 00:14:06.944 "ff7fb366-535e-46b9-9be6-bd7774041412" 00:14:06.944 ], 00:14:06.944 "product_name": "NVMe disk", 00:14:06.944 "block_size": 4096, 00:14:06.944 "num_blocks": 1310720, 00:14:06.944 "uuid": "ff7fb366-535e-46b9-9be6-bd7774041412", 00:14:06.944 "numa_id": -1, 00:14:06.944 "assigned_rate_limits": { 00:14:06.944 "rw_ios_per_sec": 0, 00:14:06.944 "rw_mbytes_per_sec": 0, 00:14:06.944 "r_mbytes_per_sec": 0, 00:14:06.944 "w_mbytes_per_sec": 0 00:14:06.944 }, 00:14:06.944 "claimed": false, 00:14:06.944 "zoned": false, 00:14:06.944 "supported_io_types": { 00:14:06.944 "read": true, 00:14:06.944 "write": true, 00:14:06.944 "unmap": true, 00:14:06.944 "flush": true, 00:14:06.944 "reset": true, 00:14:06.944 "nvme_admin": true, 00:14:06.944 "nvme_io": true, 00:14:06.944 "nvme_io_md": false, 00:14:06.944 "write_zeroes": true, 00:14:06.944 "zcopy": false, 00:14:06.944 "get_zone_info": false, 00:14:06.944 "zone_management": false, 00:14:06.944 "zone_append": false, 00:14:06.944 "compare": true, 00:14:06.944 "compare_and_write": false, 00:14:06.944 "abort": true, 00:14:06.944 "seek_hole": false, 00:14:06.944 "seek_data": false, 00:14:06.944 "copy": true, 00:14:06.944 "nvme_iov_md": false 00:14:06.944 }, 00:14:06.944 "driver_specific": { 00:14:06.944 "nvme": [ 00:14:06.944 { 00:14:06.944 "pci_address": "0000:00:11.0", 00:14:06.944 "trid": { 00:14:06.944 "trtype": "PCIe", 00:14:06.944 
"traddr": "0000:00:11.0" 00:14:06.944 }, 00:14:06.944 "ctrlr_data": { 00:14:06.944 "cntlid": 0, 00:14:06.944 "vendor_id": "0x1b36", 00:14:06.944 "model_number": "QEMU NVMe Ctrl", 00:14:06.944 "serial_number": "12341", 00:14:06.944 "firmware_revision": "8.0.0", 00:14:06.944 "subnqn": "nqn.2019-08.org.qemu:12341", 00:14:06.944 "oacs": { 00:14:06.944 "security": 0, 00:14:06.944 "format": 1, 00:14:06.944 "firmware": 0, 00:14:06.944 "ns_manage": 1 00:14:06.944 }, 00:14:06.944 "multi_ctrlr": false, 00:14:06.944 "ana_reporting": false 00:14:06.944 }, 00:14:06.944 "vs": { 00:14:06.944 "nvme_version": "1.4" 00:14:06.944 }, 00:14:06.944 "ns_data": { 00:14:06.944 "id": 1, 00:14:06.944 "can_share": false 00:14:06.944 } 00:14:06.944 } 00:14:06.944 ], 00:14:06.944 "mp_policy": "active_passive" 00:14:06.944 } 00:14:06.944 } 00:14:06.944 ]' 00:14:06.944 10:45:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:06.944 10:45:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:06.944 10:45:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:06.944 10:45:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:14:06.944 10:45:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:14:06.944 10:45:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:14:06.944 10:45:27 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:14:06.944 10:45:27 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:14:06.944 10:45:27 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:14:06.944 10:45:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:06.944 10:45:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:14:07.204 10:45:27 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:14:07.204 10:45:27 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:14:07.463 10:45:27 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=1c16ce69-477d-4a83-8d7f-00dc7b57d129 00:14:07.463 10:45:27 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1c16ce69-477d-4a83-8d7f-00dc7b57d129 00:14:07.720 10:45:28 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=daedfda8-ad8f-42f4-a15d-c205cd20e796 00:14:07.720 10:45:28 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 daedfda8-ad8f-42f4-a15d-c205cd20e796 00:14:07.720 10:45:28 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:14:07.720 10:45:28 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:14:07.720 10:45:28 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=daedfda8-ad8f-42f4-a15d-c205cd20e796 00:14:07.720 10:45:28 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:14:07.720 10:45:28 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size daedfda8-ad8f-42f4-a15d-c205cd20e796 00:14:07.720 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=daedfda8-ad8f-42f4-a15d-c205cd20e796 00:14:07.720 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:07.720 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:07.720 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:07.720 10:45:28 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b daedfda8-ad8f-42f4-a15d-c205cd20e796 00:14:07.720 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:07.720 { 00:14:07.720 "name": "daedfda8-ad8f-42f4-a15d-c205cd20e796", 00:14:07.720 "aliases": [ 00:14:07.720 "lvs/nvme0n1p0" 00:14:07.720 ], 00:14:07.721 "product_name": "Logical Volume", 00:14:07.721 "block_size": 4096, 00:14:07.721 "num_blocks": 26476544, 00:14:07.721 "uuid": "daedfda8-ad8f-42f4-a15d-c205cd20e796", 00:14:07.721 "assigned_rate_limits": { 00:14:07.721 "rw_ios_per_sec": 0, 00:14:07.721 "rw_mbytes_per_sec": 0, 00:14:07.721 "r_mbytes_per_sec": 0, 00:14:07.721 "w_mbytes_per_sec": 0 00:14:07.721 }, 00:14:07.721 "claimed": false, 00:14:07.721 "zoned": false, 00:14:07.721 "supported_io_types": { 00:14:07.721 "read": true, 00:14:07.721 "write": true, 00:14:07.721 "unmap": true, 00:14:07.721 "flush": false, 00:14:07.721 "reset": true, 00:14:07.721 "nvme_admin": false, 00:14:07.721 "nvme_io": false, 00:14:07.721 "nvme_io_md": false, 00:14:07.721 "write_zeroes": true, 00:14:07.721 "zcopy": false, 00:14:07.721 "get_zone_info": false, 00:14:07.721 "zone_management": false, 00:14:07.721 "zone_append": false, 00:14:07.721 "compare": false, 00:14:07.721 "compare_and_write": false, 00:14:07.721 "abort": false, 00:14:07.721 "seek_hole": true, 00:14:07.721 "seek_data": true, 00:14:07.721 "copy": false, 00:14:07.721 "nvme_iov_md": false 00:14:07.721 }, 00:14:07.721 "driver_specific": { 00:14:07.721 "lvol": { 00:14:07.721 "lvol_store_uuid": "1c16ce69-477d-4a83-8d7f-00dc7b57d129", 00:14:07.721 "base_bdev": "nvme0n1", 00:14:07.721 "thin_provision": true, 00:14:07.721 "num_allocated_clusters": 0, 00:14:07.721 "snapshot": false, 00:14:07.721 "clone": false, 00:14:07.721 "esnap_clone": false 00:14:07.721 } 00:14:07.721 } 00:14:07.721 } 00:14:07.721 ]' 00:14:07.721 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:07.721 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:07.721 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:07.977 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:14:07.977 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:14:07.977 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:14:07.977 10:45:28 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:14:07.977 10:45:28 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:14:07.977 10:45:28 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:14:08.234 10:45:28 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:14:08.234 10:45:28 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:14:08.234 10:45:28 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size daedfda8-ad8f-42f4-a15d-c205cd20e796 00:14:08.234 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=daedfda8-ad8f-42f4-a15d-c205cd20e796 00:14:08.234 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:08.234 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:08.234 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:08.234 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b daedfda8-ad8f-42f4-a15d-c205cd20e796 00:14:08.234 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:08.234 { 00:14:08.234 "name": "daedfda8-ad8f-42f4-a15d-c205cd20e796", 00:14:08.234 "aliases": [ 00:14:08.234 "lvs/nvme0n1p0" 00:14:08.234 ], 00:14:08.234 "product_name": "Logical Volume", 00:14:08.234 "block_size": 4096, 00:14:08.234 "num_blocks": 26476544, 00:14:08.234 "uuid": "daedfda8-ad8f-42f4-a15d-c205cd20e796", 00:14:08.234 "assigned_rate_limits": { 00:14:08.234 "rw_ios_per_sec": 0, 00:14:08.234 "rw_mbytes_per_sec": 0, 00:14:08.234 "r_mbytes_per_sec": 0, 00:14:08.234 "w_mbytes_per_sec": 0 00:14:08.234 }, 00:14:08.234 "claimed": false, 00:14:08.234 "zoned": false, 00:14:08.234 "supported_io_types": { 00:14:08.234 "read": true, 00:14:08.234 "write": true, 00:14:08.234 "unmap": true, 00:14:08.234 "flush": false, 00:14:08.234 "reset": true, 00:14:08.234 "nvme_admin": false, 00:14:08.234 "nvme_io": false, 00:14:08.234 "nvme_io_md": false, 00:14:08.234 "write_zeroes": true, 00:14:08.234 "zcopy": false, 00:14:08.234 "get_zone_info": false, 00:14:08.234 "zone_management": false, 00:14:08.234 "zone_append": false, 00:14:08.234 "compare": false, 00:14:08.234 "compare_and_write": false, 00:14:08.234 "abort": false, 00:14:08.234 "seek_hole": true, 00:14:08.234 "seek_data": true, 00:14:08.234 "copy": false, 00:14:08.234 "nvme_iov_md": false 00:14:08.234 }, 00:14:08.234 "driver_specific": { 00:14:08.234 "lvol": { 00:14:08.234 "lvol_store_uuid": "1c16ce69-477d-4a83-8d7f-00dc7b57d129", 00:14:08.234 "base_bdev": "nvme0n1", 00:14:08.234 "thin_provision": true, 00:14:08.234 "num_allocated_clusters": 0, 00:14:08.234 "snapshot": false, 00:14:08.234 "clone": false, 00:14:08.234 "esnap_clone": false 00:14:08.234 } 00:14:08.234 } 00:14:08.234 } 00:14:08.234 ]' 00:14:08.234 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:08.491 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:08.491 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:08.491 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:14:08.491 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:14:08.491 10:45:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:14:08.492 10:45:28 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:14:08.492 10:45:28 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:14:08.492 10:45:29 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:14:08.492 10:45:29 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:14:08.492 10:45:29 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:14:08.492 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:14:08.492 10:45:29 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size daedfda8-ad8f-42f4-a15d-c205cd20e796 00:14:08.492 10:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=daedfda8-ad8f-42f4-a15d-c205cd20e796 00:14:08.492 10:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:08.492 10:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:14:08.492 10:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:14:08.492 10:45:29 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b daedfda8-ad8f-42f4-a15d-c205cd20e796 00:14:08.749 10:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:08.749 { 00:14:08.749 "name": "daedfda8-ad8f-42f4-a15d-c205cd20e796", 00:14:08.749 "aliases": [ 00:14:08.749 "lvs/nvme0n1p0" 00:14:08.749 ], 00:14:08.749 "product_name": "Logical Volume", 00:14:08.749 "block_size": 4096, 00:14:08.749 "num_blocks": 26476544, 00:14:08.749 "uuid": "daedfda8-ad8f-42f4-a15d-c205cd20e796", 00:14:08.749 "assigned_rate_limits": { 00:14:08.749 "rw_ios_per_sec": 0, 00:14:08.749 "rw_mbytes_per_sec": 0, 00:14:08.749 "r_mbytes_per_sec": 0, 00:14:08.749 "w_mbytes_per_sec": 0 00:14:08.749 }, 00:14:08.749 "claimed": false, 00:14:08.749 "zoned": false, 00:14:08.749 "supported_io_types": { 00:14:08.749 "read": true, 00:14:08.749 "write": true, 00:14:08.749 "unmap": true, 00:14:08.749 "flush": false, 00:14:08.749 "reset": true, 00:14:08.749 "nvme_admin": false, 00:14:08.749 "nvme_io": false, 00:14:08.749 "nvme_io_md": false, 00:14:08.749 "write_zeroes": true, 00:14:08.749 "zcopy": false, 00:14:08.749 "get_zone_info": false, 00:14:08.749 "zone_management": false, 00:14:08.749 "zone_append": false, 00:14:08.749 "compare": false, 00:14:08.749 "compare_and_write": false, 00:14:08.749 "abort": false, 00:14:08.749 "seek_hole": true, 00:14:08.749 "seek_data": true, 00:14:08.749 "copy": false, 00:14:08.749 "nvme_iov_md": false 00:14:08.749 }, 00:14:08.749 "driver_specific": { 00:14:08.749 "lvol": { 00:14:08.749 "lvol_store_uuid": "1c16ce69-477d-4a83-8d7f-00dc7b57d129", 00:14:08.749 "base_bdev": "nvme0n1", 00:14:08.749 "thin_provision": true, 00:14:08.749 "num_allocated_clusters": 0, 00:14:08.749 "snapshot": false, 00:14:08.749 "clone": false, 00:14:08.749 "esnap_clone": false 00:14:08.749 } 00:14:08.749 } 00:14:08.749 } 00:14:08.749 ]' 00:14:08.749 10:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:08.749 10:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:14:08.749 10:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:08.749 10:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:14:08.749 10:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:14:08.749 10:45:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:14:08.749 10:45:29 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:14:08.749 10:45:29 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:14:08.749 10:45:29 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d daedfda8-ad8f-42f4-a15d-c205cd20e796 -c nvc0n1p0 --l2p_dram_limit 60 00:14:09.009 [2024-10-08 10:45:29.496786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:09.009 [2024-10-08 10:45:29.496832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:14:09.009 [2024-10-08 10:45:29.496845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:14:09.009 [2024-10-08 10:45:29.496860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:09.009 [2024-10-08 10:45:29.496933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:09.009 [2024-10-08 10:45:29.496949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:09.009 [2024-10-08 10:45:29.496960] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:14:09.009 [2024-10-08 10:45:29.496965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:09.009 [2024-10-08 10:45:29.497003] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:14:09.009 [2024-10-08 10:45:29.497218] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:14:09.009 [2024-10-08 10:45:29.497239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:09.009 [2024-10-08 10:45:29.497245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:09.009 [2024-10-08 10:45:29.497254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:14:09.009 [2024-10-08 10:45:29.497260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:09.009 [2024-10-08 10:45:29.497324] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 974c851b-c234-4dd8-8c6d-d60e155d709a 00:14:09.009 [2024-10-08 10:45:29.498351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:09.009 [2024-10-08 10:45:29.498376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:14:09.009 [2024-10-08 10:45:29.498384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:14:09.009 [2024-10-08 10:45:29.498391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:09.009 [2024-10-08 10:45:29.503609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:09.009 [2024-10-08 10:45:29.503636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:09.010 [2024-10-08 10:45:29.503643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.145 ms 00:14:09.010 [2024-10-08 10:45:29.503655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:09.010 [2024-10-08 10:45:29.503742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:09.010 [2024-10-08 10:45:29.503752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:09.010 [2024-10-08 10:45:29.503758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:14:09.010 [2024-10-08 10:45:29.503765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:09.010 [2024-10-08 10:45:29.503818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:09.010 [2024-10-08 10:45:29.503832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:14:09.010 [2024-10-08 10:45:29.503839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:14:09.010 [2024-10-08 10:45:29.503846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:09.010 [2024-10-08 10:45:29.503872] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:14:09.010 [2024-10-08 10:45:29.505188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:09.010 [2024-10-08 10:45:29.505210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:09.010 [2024-10-08 10:45:29.505219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:14:09.010 [2024-10-08 10:45:29.505225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:09.010 [2024-10-08 10:45:29.505258] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:09.010 [2024-10-08 10:45:29.505263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:14:09.010 [2024-10-08 10:45:29.505272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:14:09.010 [2024-10-08 10:45:29.505278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:09.010 [2024-10-08 10:45:29.505300] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:14:09.010 [2024-10-08 10:45:29.505429] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:14:09.010 [2024-10-08 10:45:29.505442] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:14:09.010 [2024-10-08 10:45:29.505458] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:14:09.010 [2024-10-08 10:45:29.505479] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:14:09.010 [2024-10-08 10:45:29.505496] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:14:09.010 [2024-10-08 10:45:29.505514] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:14:09.010 [2024-10-08 10:45:29.505521] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:14:09.010 [2024-10-08 10:45:29.505528] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:14:09.010 [2024-10-08 10:45:29.505533] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:14:09.010 [2024-10-08 10:45:29.505541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:09.010 [2024-10-08 10:45:29.505547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:14:09.010 [2024-10-08 10:45:29.505554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:14:09.010 [2024-10-08 10:45:29.505559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:09.010 [2024-10-08 10:45:29.505634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:09.010 [2024-10-08 10:45:29.505644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:14:09.010 [2024-10-08 10:45:29.505652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:14:09.010 [2024-10-08 10:45:29.505658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:09.010 [2024-10-08 10:45:29.505770] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:14:09.010 [2024-10-08 10:45:29.505781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:14:09.010 [2024-10-08 10:45:29.505788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:09.010 [2024-10-08 10:45:29.505811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:09.010 [2024-10-08 10:45:29.505818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:14:09.010 [2024-10-08 10:45:29.505823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:14:09.010 [2024-10-08 10:45:29.505830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:14:09.010 [2024-10-08 10:45:29.505835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:14:09.010 
[2024-10-08 10:45:29.505843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:14:09.010 [2024-10-08 10:45:29.505849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:09.010 [2024-10-08 10:45:29.505856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:14:09.010 [2024-10-08 10:45:29.505862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:14:09.010 [2024-10-08 10:45:29.505871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:14:09.010 [2024-10-08 10:45:29.505877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:14:09.010 [2024-10-08 10:45:29.505884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:14:09.010 [2024-10-08 10:45:29.505889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:09.010 [2024-10-08 10:45:29.505896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:14:09.010 [2024-10-08 10:45:29.505902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:14:09.010 [2024-10-08 10:45:29.505911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:09.010 [2024-10-08 10:45:29.505917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:14:09.010 [2024-10-08 10:45:29.505925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:14:09.010 [2024-10-08 10:45:29.505934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:09.010 [2024-10-08 10:45:29.505942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:14:09.010 [2024-10-08 10:45:29.505948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:14:09.010 [2024-10-08 10:45:29.505955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:09.010 [2024-10-08 10:45:29.505972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:14:09.010 [2024-10-08 10:45:29.505979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:14:09.010 [2024-10-08 10:45:29.505985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:09.010 [2024-10-08 10:45:29.505994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:14:09.010 [2024-10-08 10:45:29.506001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:14:09.010 [2024-10-08 10:45:29.506008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:14:09.010 [2024-10-08 10:45:29.506013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:14:09.010 [2024-10-08 10:45:29.506020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:14:09.010 [2024-10-08 10:45:29.506026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:09.010 [2024-10-08 10:45:29.506033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:14:09.010 [2024-10-08 10:45:29.506039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:14:09.010 [2024-10-08 10:45:29.506046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:14:09.010 [2024-10-08 10:45:29.506053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:14:09.010 [2024-10-08 10:45:29.506060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:14:09.010 [2024-10-08 10:45:29.506065] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:14:09.010 [2024-10-08 10:45:29.506072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:14:09.010 [2024-10-08 10:45:29.506078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:14:09.010 [2024-10-08 10:45:29.506085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:09.010 [2024-10-08 10:45:29.506090] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:14:09.010 [2024-10-08 10:45:29.506099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:14:09.010 [2024-10-08 10:45:29.506113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:14:09.010 [2024-10-08 10:45:29.506120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:14:09.010 [2024-10-08 10:45:29.506126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:14:09.010 [2024-10-08 10:45:29.506133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:14:09.010 [2024-10-08 10:45:29.506139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:14:09.010 [2024-10-08 10:45:29.506148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:14:09.010 [2024-10-08 10:45:29.506158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:14:09.010 [2024-10-08 10:45:29.506169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:14:09.010 [2024-10-08 10:45:29.506177] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:14:09.010 [2024-10-08 10:45:29.506195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:09.010 [2024-10-08 10:45:29.506203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:14:09.010 [2024-10-08 10:45:29.506210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:14:09.010 [2024-10-08 10:45:29.506216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:14:09.010 [2024-10-08 10:45:29.506223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:14:09.010 [2024-10-08 10:45:29.506229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:14:09.010 [2024-10-08 10:45:29.506237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:14:09.010 [2024-10-08 10:45:29.506243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:14:09.010 [2024-10-08 10:45:29.506249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:14:09.010 [2024-10-08 10:45:29.506254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:14:09.010 [2024-10-08 10:45:29.506261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 
blk_sz:0x20 00:14:09.010 [2024-10-08 10:45:29.506266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:14:09.010 [2024-10-08 10:45:29.506272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:14:09.010 [2024-10-08 10:45:29.506277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:14:09.010 [2024-10-08 10:45:29.506284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:14:09.011 [2024-10-08 10:45:29.506289] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:14:09.011 [2024-10-08 10:45:29.506296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:14:09.011 [2024-10-08 10:45:29.506302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:14:09.011 [2024-10-08 10:45:29.506308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:14:09.011 [2024-10-08 10:45:29.506314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:14:09.011 [2024-10-08 10:45:29.506321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:14:09.011 [2024-10-08 10:45:29.506327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:09.011 [2024-10-08 10:45:29.506335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:14:09.011 [2024-10-08 10:45:29.506341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:14:09.011 [2024-10-08 10:45:29.506355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:09.011 [2024-10-08 10:45:29.506422] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
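The layout dump above is internally consistent with the get_bdev_size output earlier in this test, and the key figures can be re-derived by hand. A minimal bash sketch of the arithmetic, using only values reported verbatim in this run:

    # Base bdev: 26476544 blocks x 4096 B each, per bdev_get_bdevs above.
    echo $(( 26476544 * 4096 / 1024 / 1024 ))   # 103424 MiB -- the reported base device capacity
    # L2P table: 20971520 entries x 4 B each ("L2P address size: 4").
    echo $(( 20971520 * 4 / 1024 / 1024 ))      # 80 MiB -- the size of "Region l2p"
    # One L2P entry per 4096-B user block, so ftl0 exposes:
    echo $(( 20971520 * 4096 / 1024 / 1024 ))   # 81920 MiB (80 GiB), matching ftl0's num_blocks=20971520

Because the create call passed --l2p_dram_limit 60, only part of that 80 MiB table is kept resident in DRAM, which is why ftl_l2p_cache_init reports "l2p maximum resident size is: 59 (of 60) MiB" just below.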
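Separately, the "/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected" message earlier in this run is a shell quoting bug, not an FTL failure: the xtrace shows the test expanded to '[' -eq 1 ']', i.e. the left-hand variable was unset or empty, so test saw -eq with no first operand; the condition simply failed and the script fell through to the default l2p sizing at fio.sh@56. A defensive sketch of the pattern (the variable name is illustrative -- the log only shows the already-expanded command):

    flag=""                               # 'flag' is a stand-in; the real variable at fio.sh:52 is not visible in this log
    # [ $flag -eq 1 ]                     # expands to: [ -eq 1 ]  -> "unary operator expected"
    [ "${flag:-0}" -eq 1 ] && echo yes    # defaulting to 0 keeps the numeric test well-formed
    [[ $flag -eq 1 ]] && echo yes         # bash's [[ ]] also tolerates an empty expansion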
00:14:09.011 [2024-10-08 10:45:29.506430] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:14:14.328 [2024-10-08 10:45:33.883329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:33.883387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:14:14.328 [2024-10-08 10:45:33.883402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4376.890 ms 00:14:14.328 [2024-10-08 10:45:33.883412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:33.902467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:33.902544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:14.328 [2024-10-08 10:45:33.902567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.884 ms 00:14:14.328 [2024-10-08 10:45:33.902589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:33.902840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:33.902874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:14:14.328 [2024-10-08 10:45:33.902911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:14:14.328 [2024-10-08 10:45:33.902935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:33.914696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:33.914734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:14.328 [2024-10-08 10:45:33.914745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.665 ms 00:14:14.328 [2024-10-08 10:45:33.914758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:33.914813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:33.914824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:14.328 [2024-10-08 10:45:33.914832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:14:14.328 [2024-10-08 10:45:33.914842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:33.915194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:33.915220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:14.328 [2024-10-08 10:45:33.915228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:14:14.328 [2024-10-08 10:45:33.915239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:33.915371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:33.915388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:14.328 [2024-10-08 10:45:33.915398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:14:14.328 [2024-10-08 10:45:33.915408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:33.920823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:33.920854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:14.328 [2024-10-08 
10:45:33.920863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.383 ms 00:14:14.328 [2024-10-08 10:45:33.920872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:33.929134] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:14:14.328 [2024-10-08 10:45:33.943250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:33.943281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:14:14.328 [2024-10-08 10:45:33.943293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.280 ms 00:14:14.328 [2024-10-08 10:45:33.943301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:33.996031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:33.996083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:14:14.328 [2024-10-08 10:45:33.996103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.687 ms 00:14:14.328 [2024-10-08 10:45:33.996111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:33.996301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:33.996313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:14:14.328 [2024-10-08 10:45:33.996327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:14:14.328 [2024-10-08 10:45:33.996335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:33.999920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:33.999954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:14:14.328 [2024-10-08 10:45:33.999968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.534 ms 00:14:14.328 [2024-10-08 10:45:33.999975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:34.002614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:34.002645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:14:14.328 [2024-10-08 10:45:34.002658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:14:14.328 [2024-10-08 10:45:34.002666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:34.003017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:34.003033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:14:14.328 [2024-10-08 10:45:34.003055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:14:14.328 [2024-10-08 10:45:34.003063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:34.035115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:34.035149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:14:14.328 [2024-10-08 10:45:34.035163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.008 ms 00:14:14.328 [2024-10-08 10:45:34.035171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:34.039421] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:34.039453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:14:14.328 [2024-10-08 10:45:34.039475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.181 ms 00:14:14.328 [2024-10-08 10:45:34.039483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:34.043444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:34.043474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:14:14.328 [2024-10-08 10:45:34.043485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.913 ms 00:14:14.328 [2024-10-08 10:45:34.043492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:34.048055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:34.048090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:14:14.328 [2024-10-08 10:45:34.048104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.509 ms 00:14:14.328 [2024-10-08 10:45:34.048111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:34.048164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:34.048174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:14:14.328 [2024-10-08 10:45:34.048184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:14:14.328 [2024-10-08 10:45:34.048191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:34.048278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.328 [2024-10-08 10:45:34.048299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:14:14.328 [2024-10-08 10:45:34.048309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:14:14.328 [2024-10-08 10:45:34.048319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.328 [2024-10-08 10:45:34.049428] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4552.215 ms, result 0 00:14:14.328 { 00:14:14.328 "name": "ftl0", 00:14:14.328 "uuid": "974c851b-c234-4dd8-8c6d-d60e155d709a" 00:14:14.328 } 00:14:14.328 10:45:34 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:14:14.328 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:14:14.328 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:14.328 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:14:14.328 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:14.328 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:14.328 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:14.328 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:14:14.328 [ 00:14:14.328 { 00:14:14.328 "name": "ftl0", 00:14:14.328 "aliases": [ 00:14:14.328 "974c851b-c234-4dd8-8c6d-d60e155d709a" 00:14:14.328 ], 00:14:14.328 "product_name": "FTL disk", 00:14:14.328 
"block_size": 4096, 00:14:14.328 "num_blocks": 20971520, 00:14:14.328 "uuid": "974c851b-c234-4dd8-8c6d-d60e155d709a", 00:14:14.328 "assigned_rate_limits": { 00:14:14.328 "rw_ios_per_sec": 0, 00:14:14.328 "rw_mbytes_per_sec": 0, 00:14:14.328 "r_mbytes_per_sec": 0, 00:14:14.328 "w_mbytes_per_sec": 0 00:14:14.328 }, 00:14:14.328 "claimed": false, 00:14:14.328 "zoned": false, 00:14:14.328 "supported_io_types": { 00:14:14.328 "read": true, 00:14:14.328 "write": true, 00:14:14.328 "unmap": true, 00:14:14.328 "flush": true, 00:14:14.328 "reset": false, 00:14:14.328 "nvme_admin": false, 00:14:14.328 "nvme_io": false, 00:14:14.328 "nvme_io_md": false, 00:14:14.328 "write_zeroes": true, 00:14:14.328 "zcopy": false, 00:14:14.328 "get_zone_info": false, 00:14:14.328 "zone_management": false, 00:14:14.328 "zone_append": false, 00:14:14.328 "compare": false, 00:14:14.329 "compare_and_write": false, 00:14:14.329 "abort": false, 00:14:14.329 "seek_hole": false, 00:14:14.329 "seek_data": false, 00:14:14.329 "copy": false, 00:14:14.329 "nvme_iov_md": false 00:14:14.329 }, 00:14:14.329 "driver_specific": { 00:14:14.329 "ftl": { 00:14:14.329 "base_bdev": "daedfda8-ad8f-42f4-a15d-c205cd20e796", 00:14:14.329 "cache": "nvc0n1p0" 00:14:14.329 } 00:14:14.329 } 00:14:14.329 } 00:14:14.329 ] 00:14:14.329 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:14:14.329 10:45:34 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:14:14.329 10:45:34 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:14:14.329 10:45:34 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:14:14.329 10:45:34 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:14:14.329 [2024-10-08 10:45:34.852108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.329 [2024-10-08 10:45:34.852153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:14:14.329 [2024-10-08 10:45:34.852165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:14:14.329 [2024-10-08 10:45:34.852175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.329 [2024-10-08 10:45:34.852206] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:14:14.329 [2024-10-08 10:45:34.852685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.329 [2024-10-08 10:45:34.852709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:14:14.329 [2024-10-08 10:45:34.852720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:14:14.329 [2024-10-08 10:45:34.852727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.329 [2024-10-08 10:45:34.853285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.329 [2024-10-08 10:45:34.853306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:14:14.329 [2024-10-08 10:45:34.853317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:14:14.329 [2024-10-08 10:45:34.853325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.329 [2024-10-08 10:45:34.856577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.329 [2024-10-08 10:45:34.856596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:14:14.329 [2024-10-08 
10:45:34.856606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.215 ms 00:14:14.329 [2024-10-08 10:45:34.856614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.329 [2024-10-08 10:45:34.862769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.329 [2024-10-08 10:45:34.862809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:14:14.329 [2024-10-08 10:45:34.862834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.122 ms 00:14:14.329 [2024-10-08 10:45:34.862841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.329 [2024-10-08 10:45:34.864894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.329 [2024-10-08 10:45:34.864925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:14:14.329 [2024-10-08 10:45:34.864937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.927 ms 00:14:14.329 [2024-10-08 10:45:34.864944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.329 [2024-10-08 10:45:34.869438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.329 [2024-10-08 10:45:34.869470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:14:14.329 [2024-10-08 10:45:34.869481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.441 ms 00:14:14.329 [2024-10-08 10:45:34.869490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.329 [2024-10-08 10:45:34.869669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.329 [2024-10-08 10:45:34.869679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:14:14.329 [2024-10-08 10:45:34.869689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:14:14.329 [2024-10-08 10:45:34.869696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.329 [2024-10-08 10:45:34.871955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.329 [2024-10-08 10:45:34.871986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:14:14.329 [2024-10-08 10:45:34.871997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms 00:14:14.329 [2024-10-08 10:45:34.872003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.329 [2024-10-08 10:45:34.874368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.329 [2024-10-08 10:45:34.874397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:14:14.329 [2024-10-08 10:45:34.874408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.317 ms 00:14:14.329 [2024-10-08 10:45:34.874414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.329 [2024-10-08 10:45:34.876142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.329 [2024-10-08 10:45:34.876172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:14:14.329 [2024-10-08 10:45:34.876182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.684 ms 00:14:14.329 [2024-10-08 10:45:34.876188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.329 [2024-10-08 10:45:34.877927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.329 [2024-10-08 10:45:34.877957] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:14:14.329 [2024-10-08 10:45:34.877968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:14:14.329 [2024-10-08 10:45:34.877974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.329 [2024-10-08 10:45:34.878017] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:14:14.329 [2024-10-08 10:45:34.878030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 
10:45:34.878227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:14:14.329 [2024-10-08 10:45:34.878418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:14:14.330 [2024-10-08 10:45:34.878435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:14:14.330 [2024-10-08 10:45:34.878912] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:14:14.330 [2024-10-08 10:45:34.878922] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 974c851b-c234-4dd8-8c6d-d60e155d709a 00:14:14.330 [2024-10-08 10:45:34.878930] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:14:14.330 [2024-10-08 10:45:34.878940] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:14:14.330 [2024-10-08 10:45:34.878958] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:14:14.330 [2024-10-08 10:45:34.878967] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:14:14.330 [2024-10-08 10:45:34.878974] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:14:14.330 [2024-10-08 10:45:34.878982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:14:14.330 [2024-10-08 10:45:34.879004] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:14:14.330 [2024-10-08 10:45:34.879012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:14:14.330 [2024-10-08 10:45:34.879018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:14:14.330 [2024-10-08 10:45:34.879027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.330 [2024-10-08 10:45:34.879034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:14:14.330 [2024-10-08 10:45:34.879044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:14:14.330 [2024-10-08 10:45:34.879051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.330 [2024-10-08 10:45:34.880583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.330 [2024-10-08 10:45:34.880608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:14:14.330 [2024-10-08 10:45:34.880619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:14:14.330 [2024-10-08 10:45:34.880626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.330 [2024-10-08 10:45:34.880724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:14:14.330 [2024-10-08 10:45:34.880733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:14:14.330 [2024-10-08 10:45:34.880743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:14:14.330 [2024-10-08 10:45:34.880750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.330 [2024-10-08 10:45:34.886146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:14.330 [2024-10-08 10:45:34.886178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:14:14.330 [2024-10-08 10:45:34.886189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:14.330 [2024-10-08 10:45:34.886196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.330 
[2024-10-08 10:45:34.886258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:14.330 [2024-10-08 10:45:34.886266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:14:14.330 [2024-10-08 10:45:34.886275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:14.330 [2024-10-08 10:45:34.886282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.330 [2024-10-08 10:45:34.886385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:14.330 [2024-10-08 10:45:34.886395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:14:14.330 [2024-10-08 10:45:34.886404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:14.330 [2024-10-08 10:45:34.886411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.330 [2024-10-08 10:45:34.886461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:14.330 [2024-10-08 10:45:34.886470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:14:14.330 [2024-10-08 10:45:34.886488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:14.330 [2024-10-08 10:45:34.886495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.330 [2024-10-08 10:45:34.896035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:14.330 [2024-10-08 10:45:34.896068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:14:14.330 [2024-10-08 10:45:34.896079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:14.330 [2024-10-08 10:45:34.896086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.592 [2024-10-08 10:45:34.903999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:14.592 [2024-10-08 10:45:34.904032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:14:14.592 [2024-10-08 10:45:34.904044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:14.592 [2024-10-08 10:45:34.904051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.592 [2024-10-08 10:45:34.904125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:14.592 [2024-10-08 10:45:34.904136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:14:14.592 [2024-10-08 10:45:34.904158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:14.592 [2024-10-08 10:45:34.904165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.592 [2024-10-08 10:45:34.904224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:14.592 [2024-10-08 10:45:34.904242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:14:14.592 [2024-10-08 10:45:34.904251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:14.592 [2024-10-08 10:45:34.904258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.592 [2024-10-08 10:45:34.904340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:14.592 [2024-10-08 10:45:34.904350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:14:14.592 [2024-10-08 10:45:34.904361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:14.592 [2024-10-08 10:45:34.904368] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.592 [2024-10-08 10:45:34.904425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:14.592 [2024-10-08 10:45:34.904434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:14:14.592 [2024-10-08 10:45:34.904444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:14.592 [2024-10-08 10:45:34.904451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.592 [2024-10-08 10:45:34.904499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:14.592 [2024-10-08 10:45:34.904512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:14:14.592 [2024-10-08 10:45:34.904533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:14.592 [2024-10-08 10:45:34.904541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.592 [2024-10-08 10:45:34.904597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:14:14.592 [2024-10-08 10:45:34.904607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:14:14.592 [2024-10-08 10:45:34.904617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:14:14.592 [2024-10-08 10:45:34.904625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:14:14.592 [2024-10-08 10:45:34.904828] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.679 ms, result 0 00:14:14.592 true 00:14:14.592 10:45:34 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 85228 00:14:14.592 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 85228 ']' 00:14:14.592 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 85228 00:14:14.592 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:14:14.592 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.592 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85228 00:14:14.592 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:14.592 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:14.592 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85228' 00:14:14.592 killing process with pid 85228 00:14:14.592 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 85228 00:14:14.592 10:45:34 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 85228 00:14:15.535 10:45:36 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:14:15.535 10:45:36 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:15.535 10:45:36 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:14:15.535 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:15.535 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:15.796 10:45:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:14:15.796 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:14:15.796 fio-3.35 00:14:15.796 Starting 1 thread 00:14:22.387 00:14:22.387 test: (groupid=0, jobs=1): err= 0: pid=85414: Tue Oct 8 10:45:42 2024 00:14:22.387 read: IOPS=692, BW=46.0MiB/s (48.2MB/s)(255MiB/5539msec) 00:14:22.387 slat (nsec): min=4034, max=22029, avg=5572.96, stdev=1835.06 00:14:22.387 clat (usec): min=277, max=1535, avg=663.43, stdev=187.85 00:14:22.387 lat (usec): min=281, max=1546, avg=669.00, stdev=188.00 00:14:22.387 clat percentiles (usec): 00:14:22.387 | 1.00th=[ 322], 5.00th=[ 355], 10.00th=[ 400], 20.00th=[ 474], 00:14:22.387 | 30.00th=[ 523], 40.00th=[ 545], 50.00th=[ 742], 60.00th=[ 791], 00:14:22.387 | 70.00th=[ 807], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 889], 00:14:22.387 | 99.00th=[ 1020], 99.50th=[ 1074], 99.90th=[ 1270], 99.95th=[ 1532], 00:14:22.387 | 99.99th=[ 1532] 00:14:22.387 write: IOPS=697, BW=46.3MiB/s (48.5MB/s)(256MiB/5531msec); 0 zone resets 00:14:22.387 slat (nsec): min=14474, max=94108, avg=19233.13, stdev=3312.87 00:14:22.387 clat (usec): min=326, max=2042, avg=741.60, stdev=210.45 00:14:22.387 lat (usec): min=340, max=2067, avg=760.84, stdev=210.80 00:14:22.387 clat percentiles (usec): 00:14:22.387 | 1.00th=[ 351], 5.00th=[ 420], 10.00th=[ 478], 20.00th=[ 537], 00:14:22.387 | 30.00th=[ 578], 40.00th=[ 635], 50.00th=[ 832], 60.00th=[ 865], 00:14:22.387 | 70.00th=[ 889], 80.00th=[ 906], 90.00th=[ 930], 95.00th=[ 979], 00:14:22.387 | 99.00th=[ 1254], 99.50th=[ 1565], 99.90th=[ 1729], 99.95th=[ 1876], 00:14:22.387 | 99.99th=[ 2040] 00:14:22.387 bw ( KiB/s): min=38488, max=67320, per=100.00%, avg=47451.64, stdev=10824.86, samples=11 00:14:22.387 iops : min= 566, max= 990, avg=697.82, stdev=159.19, samples=11 00:14:22.387 lat (usec) : 500=19.96%, 750=27.56%, 
1000=49.84% 00:14:22.387 lat (msec) : 2=2.63%, 4=0.01% 00:14:22.387 cpu : usr=99.28%, sys=0.05%, ctx=12, majf=0, minf=1181 00:14:22.387 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.387 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.387 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.387 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.387 00:14:22.387 Run status group 0 (all jobs): 00:14:22.387 READ: bw=46.0MiB/s (48.2MB/s), 46.0MiB/s-46.0MiB/s (48.2MB/s-48.2MB/s), io=255MiB (267MB), run=5539-5539msec 00:14:22.387 WRITE: bw=46.3MiB/s (48.5MB/s), 46.3MiB/s-46.3MiB/s (48.5MB/s-48.5MB/s), io=256MiB (269MB), run=5531-5531msec 00:14:22.648 ----------------------------------------------------- 00:14:22.648 Suppressions used: 00:14:22.648 count bytes template 00:14:22.648 1 5 /usr/src/fio/parse.c 00:14:22.648 1 8 libtcmalloc_minimal.so 00:14:22.649 1 904 libcrypto.so 00:14:22.649 ----------------------------------------------------- 00:14:22.649 00:14:22.649 10:45:43 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:14:22.649 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:22.649 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:22.910 10:45:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:14:22.910 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:22.910 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:22.910 fio-3.35 00:14:22.910 Starting 2 threads 00:14:49.467 00:14:49.467 first_half: (groupid=0, jobs=1): err= 0: pid=85522: Tue Oct 8 10:46:07 2024 00:14:49.467 read: IOPS=2893, BW=11.3MiB/s (11.9MB/s)(255MiB/22572msec) 00:14:49.467 slat (nsec): min=3078, max=60182, avg=4233.77, stdev=1085.83 00:14:49.467 clat (usec): min=652, max=413461, avg=35769.73, stdev=17366.23 00:14:49.467 lat (usec): min=656, max=413465, avg=35773.96, stdev=17366.38 00:14:49.467 clat percentiles (msec): 00:14:49.467 | 1.00th=[ 13], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 30], 00:14:49.467 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 33], 00:14:49.467 | 70.00th=[ 36], 80.00th=[ 39], 90.00th=[ 43], 95.00th=[ 52], 00:14:49.467 | 99.00th=[ 126], 99.50th=[ 146], 99.90th=[ 163], 99.95th=[ 326], 00:14:49.467 | 99.99th=[ 401] 00:14:49.467 write: IOPS=3572, BW=14.0MiB/s (14.6MB/s)(256MiB/18344msec); 0 zone resets 00:14:49.467 slat (usec): min=3, max=2354, avg= 5.85, stdev=13.90 00:14:49.467 clat (usec): min=379, max=78151, avg=8421.99, stdev=13115.78 00:14:49.467 lat (usec): min=393, max=78156, avg=8427.84, stdev=13115.95 00:14:49.467 clat percentiles (usec): 00:14:49.467 | 1.00th=[ 676], 5.00th=[ 775], 10.00th=[ 889], 20.00th=[ 1254], 00:14:49.467 | 30.00th=[ 2638], 40.00th=[ 3425], 50.00th=[ 4424], 60.00th=[ 5276], 00:14:49.467 | 70.00th=[ 5932], 80.00th=[11207], 90.00th=[16909], 95.00th=[30540], 00:14:49.467 | 99.00th=[64750], 99.50th=[65799], 99.90th=[76022], 99.95th=[77071], 00:14:49.467 | 99.99th=[77071] 00:14:49.467 bw ( KiB/s): min= 2144, max=40600, per=96.37%, avg=24966.10, stdev=13787.11, samples=21 00:14:49.467 iops : min= 536, max=10150, avg=6241.52, stdev=3446.78, samples=21 00:14:49.467 lat (usec) : 500=0.02%, 750=1.95%, 1000=4.82% 00:14:49.467 lat (msec) : 2=5.83%, 4=10.54%, 10=16.02%, 20=7.79%, 50=48.18% 00:14:49.467 lat (msec) : 100=4.04%, 250=0.78%, 500=0.04% 00:14:49.467 cpu : usr=99.27%, sys=0.16%, ctx=36, majf=0, minf=5547 00:14:49.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:49.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:49.467 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:49.467 issued rwts: total=65310,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:49.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:49.467 second_half: (groupid=0, jobs=1): err= 0: pid=85523: Tue Oct 8 10:46:07 2024 00:14:49.467 read: IOPS=2855, BW=11.2MiB/s (11.7MB/s)(255MiB/22871msec) 00:14:49.467 slat (nsec): min=3033, max=25389, avg=5004.41, stdev=918.55 00:14:49.467 clat (usec): min=691, max=433822, avg=35371.49, stdev=19716.67 00:14:49.467 lat (usec): min=698, max=433828, avg=35376.49, stdev=19716.74 00:14:49.467 clat percentiles (msec): 00:14:49.467 | 1.00th=[ 13], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 30], 00:14:49.467 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:14:49.467 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 41], 
95.00th=[ 48], 00:14:49.467 | 99.00th=[ 138], 99.50th=[ 150], 99.90th=[ 211], 99.95th=[ 313], 00:14:49.467 | 99.99th=[ 426] 00:14:49.467 write: IOPS=3238, BW=12.6MiB/s (13.3MB/s)(256MiB/20237msec); 0 zone resets 00:14:49.467 slat (usec): min=3, max=2291, avg= 6.62, stdev=11.11 00:14:49.467 clat (usec): min=359, max=78658, avg=9400.01, stdev=14342.25 00:14:49.467 lat (usec): min=367, max=78665, avg=9406.64, stdev=14342.35 00:14:49.467 clat percentiles (usec): 00:14:49.467 | 1.00th=[ 668], 5.00th=[ 775], 10.00th=[ 857], 20.00th=[ 1074], 00:14:49.467 | 30.00th=[ 1418], 40.00th=[ 2737], 50.00th=[ 3949], 60.00th=[ 5342], 00:14:49.467 | 70.00th=[ 6718], 80.00th=[13435], 90.00th=[27395], 95.00th=[38011], 00:14:49.467 | 99.00th=[65274], 99.50th=[66847], 99.90th=[76022], 99.95th=[76022], 00:14:49.467 | 99.99th=[78119] 00:14:49.467 bw ( KiB/s): min= 32, max=65376, per=80.95%, avg=20971.52, stdev=19569.75, samples=25 00:14:49.467 iops : min= 8, max=16344, avg=5242.88, stdev=4892.44, samples=25 00:14:49.467 lat (usec) : 500=0.04%, 750=1.83%, 1000=6.73% 00:14:49.467 lat (msec) : 2=8.79%, 4=7.98%, 10=11.74%, 20=8.91%, 50=49.38% 00:14:49.467 lat (msec) : 100=3.52%, 250=1.05%, 500=0.04% 00:14:49.467 cpu : usr=99.26%, sys=0.11%, ctx=69, majf=0, minf=5585 00:14:49.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:49.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:49.467 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:49.467 issued rwts: total=65312,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:49.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:49.467 00:14:49.467 Run status group 0 (all jobs): 00:14:49.467 READ: bw=22.3MiB/s (23.4MB/s), 11.2MiB/s-11.3MiB/s (11.7MB/s-11.9MB/s), io=510MiB (535MB), run=22572-22871msec 00:14:49.467 WRITE: bw=25.3MiB/s (26.5MB/s), 12.6MiB/s-14.0MiB/s (13.3MB/s-14.6MB/s), io=512MiB (537MB), run=18344-20237msec 00:14:49.467 ----------------------------------------------------- 00:14:49.467 Suppressions used: 00:14:49.467 count bytes template 00:14:49.467 2 10 /usr/src/fio/parse.c 00:14:49.467 4 384 /usr/src/fio/iolog.c 00:14:49.467 1 8 libtcmalloc_minimal.so 00:14:49.467 1 904 libcrypto.so 00:14:49.467 ----------------------------------------------------- 00:14:49.467 00:14:49.467 10:46:08 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:14:49.467 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:49.467 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:49.467 10:46:08 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:14:49.467 10:46:08 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:14:49.467 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:49.467 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:14:49.467 10:46:08 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:14:49.467 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:49.468 10:46:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:14:49.468 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:14:49.468 fio-3.35 00:14:49.468 Starting 1 thread 00:15:04.405 00:15:04.406 test: (groupid=0, jobs=1): err= 0: pid=85814: Tue Oct 8 10:46:24 2024 00:15:04.406 read: IOPS=7436, BW=29.0MiB/s (30.5MB/s)(255MiB/8768msec) 00:15:04.406 slat (nsec): min=3103, max=35827, avg=5439.50, stdev=1923.75 00:15:04.406 clat (usec): min=1181, max=33305, avg=17202.45, stdev=3004.39 00:15:04.406 lat (usec): min=1198, max=33312, avg=17207.89, stdev=3005.16 00:15:04.406 clat percentiles (usec): 00:15:04.406 | 1.00th=[13960], 5.00th=[14222], 10.00th=[15270], 20.00th=[15664], 00:15:04.406 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16188], 60.00th=[16319], 00:15:04.406 | 70.00th=[16581], 80.00th=[18744], 90.00th=[21103], 95.00th=[23462], 00:15:04.406 | 99.00th=[29492], 99.50th=[30540], 99.90th=[31851], 99.95th=[32113], 00:15:04.406 | 99.99th=[32637] 00:15:04.406 write: IOPS=9147, BW=35.7MiB/s (37.5MB/s)(256MiB/7164msec); 0 zone resets 00:15:04.406 slat (usec): min=4, max=1387, avg= 8.98, stdev= 9.57 00:15:04.406 clat (usec): min=632, max=91466, avg=13917.30, stdev=17691.28 00:15:04.406 lat (usec): min=664, max=91481, avg=13926.28, stdev=17691.52 00:15:04.406 clat percentiles (usec): 00:15:04.406 | 1.00th=[ 1172], 5.00th=[ 1549], 10.00th=[ 1762], 20.00th=[ 2040], 00:15:04.406 | 30.00th=[ 2376], 40.00th=[ 3359], 50.00th=[ 7701], 60.00th=[10028], 00:15:04.406 | 70.00th=[12518], 80.00th=[15533], 90.00th=[51643], 95.00th=[56361], 00:15:04.406 | 99.00th=[61080], 99.50th=[63177], 99.90th=[68682], 99.95th=[76022], 00:15:04.406 | 99.99th=[86508] 00:15:04.406 bw ( KiB/s): min= 8376, max=60720, per=95.52%, avg=34952.53, stdev=11599.00, samples=15 00:15:04.406 iops : min= 2094, max=15180, avg=8738.13, stdev=2899.75, samples=15 00:15:04.406 lat (usec) : 750=0.01%, 1000=0.17% 00:15:04.406 lat (msec) : 2=9.07%, 4=11.47%, 10=9.27%, 20=54.81%, 50=9.78% 00:15:04.406 lat (msec) : 100=5.42% 00:15:04.406 cpu : usr=98.76%, sys=0.22%, 
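The fio_bdev/fio_plugin trace above (repeated for each of the three fio jobs in this test) follows a single pattern: ldd the SPDK fio plugin, grep out the libasan runtime it links against, and preload both the sanitizer and the plugin so that plain /usr/src/fio/fio can use ioengine=spdk_bdev. A minimal standalone sketch of that pattern, with paths copied from the trace (the script itself is illustrative, not the verbatim autotest_common.sh helper):

    #!/usr/bin/env bash
    # Run fio with the SPDK bdev ioengine plugin, preloading ASAN first if present.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"; field 3 is the path.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # The sanitizer runtime must come before the plugin in LD_PRELOAD.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$1"

Invoked here with job files such as randw-verify-depth128.fio, whose ioengine=spdk_bdev setting is what makes the preloaded plugin take effect.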
ctx=34, majf=0, minf=5577 00:15:04.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:04.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.406 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:04.406 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.406 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:04.406 00:15:04.406 Run status group 0 (all jobs): 00:15:04.406 READ: bw=29.0MiB/s (30.5MB/s), 29.0MiB/s-29.0MiB/s (30.5MB/s-30.5MB/s), io=255MiB (267MB), run=8768-8768msec 00:15:04.406 WRITE: bw=35.7MiB/s (37.5MB/s), 35.7MiB/s-35.7MiB/s (37.5MB/s-37.5MB/s), io=256MiB (268MB), run=7164-7164msec 00:15:05.348 ----------------------------------------------------- 00:15:05.348 Suppressions used: 00:15:05.348 count bytes template 00:15:05.348 1 5 /usr/src/fio/parse.c 00:15:05.348 2 192 /usr/src/fio/iolog.c 00:15:05.348 1 8 libtcmalloc_minimal.so 00:15:05.348 1 904 libcrypto.so 00:15:05.348 ----------------------------------------------------- 00:15:05.348 00:15:05.348 10:46:25 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:05.348 10:46:25 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:05.348 10:46:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:05.348 10:46:25 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:05.348 Remove shared memory files 00:15:05.348 10:46:25 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:15:05.348 10:46:25 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:05.348 10:46:25 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:15:05.348 10:46:25 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:15:05.348 10:46:25 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70763 /dev/shm/spdk_tgt_trace.pid84175 00:15:05.348 10:46:25 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:05.348 10:46:25 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:15:05.348 ************************************ 00:15:05.348 END TEST ftl_fio_basic 00:15:05.348 ************************************ 00:15:05.348 00:15:05.348 real 1m0.017s 00:15:05.348 user 2m11.138s 00:15:05.348 sys 0m2.661s 00:15:05.348 10:46:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:05.348 10:46:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:05.610 10:46:25 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:05.610 10:46:25 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:05.610 10:46:25 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:05.610 10:46:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:05.610 ************************************ 00:15:05.610 START TEST ftl_bdevperf 00:15:05.610 ************************************ 00:15:05.610 10:46:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:05.610 * Looking for test storage... 
00:15:05.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:05.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.610 --rc genhtml_branch_coverage=1 00:15:05.610 --rc genhtml_function_coverage=1 00:15:05.610 --rc genhtml_legend=1 00:15:05.610 --rc geninfo_all_blocks=1 00:15:05.610 --rc geninfo_unexecuted_blocks=1 00:15:05.610 00:15:05.610 ' 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:05.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.610 --rc genhtml_branch_coverage=1 00:15:05.610 
--rc genhtml_function_coverage=1 00:15:05.610 --rc genhtml_legend=1 00:15:05.610 --rc geninfo_all_blocks=1 00:15:05.610 --rc geninfo_unexecuted_blocks=1 00:15:05.610 00:15:05.610 ' 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:05.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.610 --rc genhtml_branch_coverage=1 00:15:05.610 --rc genhtml_function_coverage=1 00:15:05.610 --rc genhtml_legend=1 00:15:05.610 --rc geninfo_all_blocks=1 00:15:05.610 --rc geninfo_unexecuted_blocks=1 00:15:05.610 00:15:05.610 ' 00:15:05.610 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:05.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:05.610 --rc genhtml_branch_coverage=1 00:15:05.610 --rc genhtml_function_coverage=1 00:15:05.610 --rc genhtml_legend=1 00:15:05.610 --rc geninfo_all_blocks=1 00:15:05.610 --rc geninfo_unexecuted_blocks=1 00:15:05.610 00:15:05.610 ' 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=86077 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 86077 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 86077 ']' 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:05.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:05.611 10:46:26 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:05.611 [2024-10-08 10:46:26.165145] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:15:05.611 [2024-10-08 10:46:26.165390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86077 ] 00:15:05.872 [2024-10-08 10:46:26.294440] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
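Per the command line in the trace above, bdevperf is launched with -z, which (together with the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message) indicates it idles until tests are driven over RPC, and -T ftl0 names the bdev under test. A rough sketch of that startup handshake; the polling loop is a stand-in for the real waitforlisten helper, not its actual body:

    # Start bdevperf idle (-z) against the FTL bdev; it waits for RPC-driven tests.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # Poll the default RPC socket until the target answers a trivial RPC.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done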
00:15:05.872 [2024-10-08 10:46:26.314029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.872 [2024-10-08 10:46:26.356429] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.444 10:46:27 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:06.444 10:46:27 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:15:06.444 10:46:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:06.444 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:15:06.444 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:06.444 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:15:06.444 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:15:06.444 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:07.017 { 00:15:07.017 "name": "nvme0n1", 00:15:07.017 "aliases": [ 00:15:07.017 "e3135ca1-c6fb-472d-bcf0-4638312b5689" 00:15:07.017 ], 00:15:07.017 "product_name": "NVMe disk", 00:15:07.017 "block_size": 4096, 00:15:07.017 "num_blocks": 1310720, 00:15:07.017 "uuid": "e3135ca1-c6fb-472d-bcf0-4638312b5689", 00:15:07.017 "numa_id": -1, 00:15:07.017 "assigned_rate_limits": { 00:15:07.017 "rw_ios_per_sec": 0, 00:15:07.017 "rw_mbytes_per_sec": 0, 00:15:07.017 "r_mbytes_per_sec": 0, 00:15:07.017 "w_mbytes_per_sec": 0 00:15:07.017 }, 00:15:07.017 "claimed": true, 00:15:07.017 "claim_type": "read_many_write_one", 00:15:07.017 "zoned": false, 00:15:07.017 "supported_io_types": { 00:15:07.017 "read": true, 00:15:07.017 "write": true, 00:15:07.017 "unmap": true, 00:15:07.017 "flush": true, 00:15:07.017 "reset": true, 00:15:07.017 "nvme_admin": true, 00:15:07.017 "nvme_io": true, 00:15:07.017 "nvme_io_md": false, 00:15:07.017 "write_zeroes": true, 00:15:07.017 "zcopy": false, 00:15:07.017 "get_zone_info": false, 00:15:07.017 "zone_management": false, 00:15:07.017 "zone_append": false, 00:15:07.017 "compare": true, 00:15:07.017 "compare_and_write": false, 00:15:07.017 "abort": true, 00:15:07.017 "seek_hole": false, 00:15:07.017 "seek_data": false, 00:15:07.017 "copy": true, 00:15:07.017 "nvme_iov_md": false 00:15:07.017 }, 00:15:07.017 "driver_specific": { 00:15:07.017 "nvme": [ 00:15:07.017 { 00:15:07.017 "pci_address": "0000:00:11.0", 00:15:07.017 "trid": { 00:15:07.017 "trtype": "PCIe", 00:15:07.017 "traddr": "0000:00:11.0" 00:15:07.017 }, 00:15:07.017 "ctrlr_data": { 00:15:07.017 "cntlid": 0, 00:15:07.017 "vendor_id": "0x1b36", 00:15:07.017 "model_number": "QEMU NVMe Ctrl", 
00:15:07.017 "serial_number": "12341", 00:15:07.017 "firmware_revision": "8.0.0", 00:15:07.017 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:07.017 "oacs": { 00:15:07.017 "security": 0, 00:15:07.017 "format": 1, 00:15:07.017 "firmware": 0, 00:15:07.017 "ns_manage": 1 00:15:07.017 }, 00:15:07.017 "multi_ctrlr": false, 00:15:07.017 "ana_reporting": false 00:15:07.017 }, 00:15:07.017 "vs": { 00:15:07.017 "nvme_version": "1.4" 00:15:07.017 }, 00:15:07.017 "ns_data": { 00:15:07.017 "id": 1, 00:15:07.017 "can_share": false 00:15:07.017 } 00:15:07.017 } 00:15:07.017 ], 00:15:07.017 "mp_policy": "active_passive" 00:15:07.017 } 00:15:07.017 } 00:15:07.017 ]' 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:07.017 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:07.278 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=1c16ce69-477d-4a83-8d7f-00dc7b57d129 00:15:07.278 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:15:07.278 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1c16ce69-477d-4a83-8d7f-00dc7b57d129 00:15:07.539 10:46:27 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:07.798 10:46:28 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=3f1a8d52-6327-4200-aa9f-029210b9d1aa 00:15:07.798 10:46:28 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3f1a8d52-6327-4200-aa9f-029210b9d1aa 00:15:08.058 10:46:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=7eed989f-ec59-4599-898f-234c74a23588 00:15:08.058 10:46:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7eed989f-ec59-4599-898f-234c74a23588 00:15:08.058 10:46:28 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:15:08.058 10:46:28 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:08.058 10:46:28 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=7eed989f-ec59-4599-898f-234c74a23588 00:15:08.058 10:46:28 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:15:08.058 10:46:28 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 7eed989f-ec59-4599-898f-234c74a23588 00:15:08.058 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=7eed989f-ec59-4599-898f-234c74a23588 00:15:08.058 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:08.058 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:08.058 10:46:28 
ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:08.058 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7eed989f-ec59-4599-898f-234c74a23588 00:15:08.058 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:08.058 { 00:15:08.058 "name": "7eed989f-ec59-4599-898f-234c74a23588", 00:15:08.058 "aliases": [ 00:15:08.058 "lvs/nvme0n1p0" 00:15:08.058 ], 00:15:08.058 "product_name": "Logical Volume", 00:15:08.058 "block_size": 4096, 00:15:08.058 "num_blocks": 26476544, 00:15:08.058 "uuid": "7eed989f-ec59-4599-898f-234c74a23588", 00:15:08.058 "assigned_rate_limits": { 00:15:08.058 "rw_ios_per_sec": 0, 00:15:08.058 "rw_mbytes_per_sec": 0, 00:15:08.058 "r_mbytes_per_sec": 0, 00:15:08.058 "w_mbytes_per_sec": 0 00:15:08.058 }, 00:15:08.058 "claimed": false, 00:15:08.058 "zoned": false, 00:15:08.058 "supported_io_types": { 00:15:08.058 "read": true, 00:15:08.058 "write": true, 00:15:08.058 "unmap": true, 00:15:08.058 "flush": false, 00:15:08.058 "reset": true, 00:15:08.058 "nvme_admin": false, 00:15:08.058 "nvme_io": false, 00:15:08.058 "nvme_io_md": false, 00:15:08.058 "write_zeroes": true, 00:15:08.058 "zcopy": false, 00:15:08.058 "get_zone_info": false, 00:15:08.058 "zone_management": false, 00:15:08.058 "zone_append": false, 00:15:08.058 "compare": false, 00:15:08.058 "compare_and_write": false, 00:15:08.058 "abort": false, 00:15:08.058 "seek_hole": true, 00:15:08.058 "seek_data": true, 00:15:08.058 "copy": false, 00:15:08.058 "nvme_iov_md": false 00:15:08.058 }, 00:15:08.058 "driver_specific": { 00:15:08.059 "lvol": { 00:15:08.059 "lvol_store_uuid": "3f1a8d52-6327-4200-aa9f-029210b9d1aa", 00:15:08.059 "base_bdev": "nvme0n1", 00:15:08.059 "thin_provision": true, 00:15:08.059 "num_allocated_clusters": 0, 00:15:08.059 "snapshot": false, 00:15:08.059 "clone": false, 00:15:08.059 "esnap_clone": false 00:15:08.059 } 00:15:08.059 } 00:15:08.059 } 00:15:08.059 ]' 00:15:08.059 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:08.059 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:08.059 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:08.318 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:08.318 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:08.318 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:08.318 10:46:28 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:15:08.318 10:46:28 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:15:08.318 10:46:28 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:08.578 10:46:28 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:08.578 10:46:28 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:08.578 10:46:28 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 7eed989f-ec59-4599-898f-234c74a23588 00:15:08.578 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=7eed989f-ec59-4599-898f-234c74a23588 00:15:08.579 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:08.579 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:08.579 10:46:28 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1381 -- # local nb 00:15:08.579 10:46:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7eed989f-ec59-4599-898f-234c74a23588 00:15:08.579 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:08.579 { 00:15:08.579 "name": "7eed989f-ec59-4599-898f-234c74a23588", 00:15:08.579 "aliases": [ 00:15:08.579 "lvs/nvme0n1p0" 00:15:08.579 ], 00:15:08.579 "product_name": "Logical Volume", 00:15:08.579 "block_size": 4096, 00:15:08.579 "num_blocks": 26476544, 00:15:08.579 "uuid": "7eed989f-ec59-4599-898f-234c74a23588", 00:15:08.579 "assigned_rate_limits": { 00:15:08.579 "rw_ios_per_sec": 0, 00:15:08.579 "rw_mbytes_per_sec": 0, 00:15:08.579 "r_mbytes_per_sec": 0, 00:15:08.579 "w_mbytes_per_sec": 0 00:15:08.579 }, 00:15:08.579 "claimed": false, 00:15:08.579 "zoned": false, 00:15:08.579 "supported_io_types": { 00:15:08.579 "read": true, 00:15:08.579 "write": true, 00:15:08.579 "unmap": true, 00:15:08.579 "flush": false, 00:15:08.579 "reset": true, 00:15:08.579 "nvme_admin": false, 00:15:08.579 "nvme_io": false, 00:15:08.579 "nvme_io_md": false, 00:15:08.579 "write_zeroes": true, 00:15:08.579 "zcopy": false, 00:15:08.579 "get_zone_info": false, 00:15:08.579 "zone_management": false, 00:15:08.579 "zone_append": false, 00:15:08.579 "compare": false, 00:15:08.579 "compare_and_write": false, 00:15:08.579 "abort": false, 00:15:08.579 "seek_hole": true, 00:15:08.579 "seek_data": true, 00:15:08.579 "copy": false, 00:15:08.579 "nvme_iov_md": false 00:15:08.579 }, 00:15:08.579 "driver_specific": { 00:15:08.579 "lvol": { 00:15:08.579 "lvol_store_uuid": "3f1a8d52-6327-4200-aa9f-029210b9d1aa", 00:15:08.579 "base_bdev": "nvme0n1", 00:15:08.579 "thin_provision": true, 00:15:08.579 "num_allocated_clusters": 0, 00:15:08.579 "snapshot": false, 00:15:08.579 "clone": false, 00:15:08.579 "esnap_clone": false 00:15:08.579 } 00:15:08.579 } 00:15:08.579 } 00:15:08.579 ]' 00:15:08.579 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:08.579 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:08.579 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:08.840 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:08.840 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:08.840 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:08.840 10:46:29 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:15:08.840 10:46:29 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:08.840 10:46:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:15:08.840 10:46:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 7eed989f-ec59-4599-898f-234c74a23588 00:15:08.840 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=7eed989f-ec59-4599-898f-234c74a23588 00:15:08.840 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:08.840 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:08.840 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:08.840 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7eed989f-ec59-4599-898f-234c74a23588 
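This is another round of the get_bdev_size helper (its bdev_get_bdevs output follows below); every round in this section reduces to one RPC plus jq arithmetic: read block_size and num_blocks, multiply, convert to MiB. A condensed sketch using the values from this bdev:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$("$rpc" bdev_get_bdevs -b 7eed989f-ec59-4599-898f-234c74a23588)
    bs=$(jq -r '.[] .block_size' <<<"$info")    # 4096
    nb=$(jq -r '.[] .num_blocks' <<<"$info")    # 26476544
    # 4096 B * 26476544 blocks = 108447924224 B = 103424 MiB
    echo $(( bs * nb / 1024 / 1024 ))

which matches the bdev_size=103424 computed in the trace.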
00:15:09.102 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:09.102 { 00:15:09.102 "name": "7eed989f-ec59-4599-898f-234c74a23588", 00:15:09.102 "aliases": [ 00:15:09.102 "lvs/nvme0n1p0" 00:15:09.102 ], 00:15:09.102 "product_name": "Logical Volume", 00:15:09.102 "block_size": 4096, 00:15:09.102 "num_blocks": 26476544, 00:15:09.102 "uuid": "7eed989f-ec59-4599-898f-234c74a23588", 00:15:09.102 "assigned_rate_limits": { 00:15:09.102 "rw_ios_per_sec": 0, 00:15:09.102 "rw_mbytes_per_sec": 0, 00:15:09.102 "r_mbytes_per_sec": 0, 00:15:09.102 "w_mbytes_per_sec": 0 00:15:09.102 }, 00:15:09.102 "claimed": false, 00:15:09.102 "zoned": false, 00:15:09.102 "supported_io_types": { 00:15:09.102 "read": true, 00:15:09.102 "write": true, 00:15:09.102 "unmap": true, 00:15:09.102 "flush": false, 00:15:09.102 "reset": true, 00:15:09.102 "nvme_admin": false, 00:15:09.102 "nvme_io": false, 00:15:09.102 "nvme_io_md": false, 00:15:09.102 "write_zeroes": true, 00:15:09.102 "zcopy": false, 00:15:09.102 "get_zone_info": false, 00:15:09.102 "zone_management": false, 00:15:09.102 "zone_append": false, 00:15:09.102 "compare": false, 00:15:09.102 "compare_and_write": false, 00:15:09.102 "abort": false, 00:15:09.102 "seek_hole": true, 00:15:09.102 "seek_data": true, 00:15:09.102 "copy": false, 00:15:09.102 "nvme_iov_md": false 00:15:09.102 }, 00:15:09.102 "driver_specific": { 00:15:09.102 "lvol": { 00:15:09.102 "lvol_store_uuid": "3f1a8d52-6327-4200-aa9f-029210b9d1aa", 00:15:09.102 "base_bdev": "nvme0n1", 00:15:09.102 "thin_provision": true, 00:15:09.102 "num_allocated_clusters": 0, 00:15:09.102 "snapshot": false, 00:15:09.102 "clone": false, 00:15:09.102 "esnap_clone": false 00:15:09.102 } 00:15:09.102 } 00:15:09.102 } 00:15:09.102 ]' 00:15:09.102 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:09.102 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:09.102 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:09.102 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:09.102 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:09.102 10:46:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:09.102 10:46:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:15:09.102 10:46:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7eed989f-ec59-4599-898f-234c74a23588 -c nvc0n1p0 --l2p_dram_limit 20 00:15:09.365 [2024-10-08 10:46:29.825551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.365 [2024-10-08 10:46:29.825701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:09.365 [2024-10-08 10:46:29.825720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:09.365 [2024-10-08 10:46:29.825729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.365 [2024-10-08 10:46:29.825768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.365 [2024-10-08 10:46:29.825780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:09.365 [2024-10-08 10:46:29.825791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:09.365 [2024-10-08 10:46:29.825814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.365 [2024-10-08 
10:46:29.825830] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:09.365 [2024-10-08 10:46:29.826023] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:09.365 [2024-10-08 10:46:29.826037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.365 [2024-10-08 10:46:29.826048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:09.365 [2024-10-08 10:46:29.826055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:15:09.365 [2024-10-08 10:46:29.826063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.365 [2024-10-08 10:46:29.826111] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4b60e38a-37ec-40ca-bf08-3335c18144d1 00:15:09.365 [2024-10-08 10:46:29.827404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.365 [2024-10-08 10:46:29.827428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:09.365 [2024-10-08 10:46:29.827438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:15:09.365 [2024-10-08 10:46:29.827445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.365 [2024-10-08 10:46:29.834425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.365 [2024-10-08 10:46:29.834453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:09.365 [2024-10-08 10:46:29.834464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.951 ms 00:15:09.365 [2024-10-08 10:46:29.834470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.365 [2024-10-08 10:46:29.834540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.365 [2024-10-08 10:46:29.834547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:09.365 [2024-10-08 10:46:29.834556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:15:09.365 [2024-10-08 10:46:29.834561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.365 [2024-10-08 10:46:29.834603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.365 [2024-10-08 10:46:29.834611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:09.365 [2024-10-08 10:46:29.834619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:09.365 [2024-10-08 10:46:29.834625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.365 [2024-10-08 10:46:29.834643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:09.365 [2024-10-08 10:46:29.836300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.365 [2024-10-08 10:46:29.836414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:09.365 [2024-10-08 10:46:29.836427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:15:09.365 [2024-10-08 10:46:29.836435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.365 [2024-10-08 10:46:29.836465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.365 [2024-10-08 10:46:29.836475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:09.365 [2024-10-08 
10:46:29.836482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:15:09.365 [2024-10-08 10:46:29.836489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.365 [2024-10-08 10:46:29.836501] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:09.365 [2024-10-08 10:46:29.836620] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:09.365 [2024-10-08 10:46:29.836630] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:09.365 [2024-10-08 10:46:29.836641] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:09.365 [2024-10-08 10:46:29.836650] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:09.365 [2024-10-08 10:46:29.836658] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:09.365 [2024-10-08 10:46:29.836665] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:09.365 [2024-10-08 10:46:29.836674] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:09.365 [2024-10-08 10:46:29.836679] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:09.365 [2024-10-08 10:46:29.836686] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:09.365 [2024-10-08 10:46:29.836693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.365 [2024-10-08 10:46:29.836703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:09.365 [2024-10-08 10:46:29.836713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:15:09.365 [2024-10-08 10:46:29.836722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.365 [2024-10-08 10:46:29.836786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.365 [2024-10-08 10:46:29.836816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:09.365 [2024-10-08 10:46:29.836823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:09.365 [2024-10-08 10:46:29.836831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.365 [2024-10-08 10:46:29.836901] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:09.365 [2024-10-08 10:46:29.836911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:09.365 [2024-10-08 10:46:29.836919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:09.365 [2024-10-08 10:46:29.836929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.365 [2024-10-08 10:46:29.836935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:09.365 [2024-10-08 10:46:29.836958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:09.365 [2024-10-08 10:46:29.836970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:09.365 [2024-10-08 10:46:29.836978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:09.365 [2024-10-08 10:46:29.836984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:09.365 [2024-10-08 10:46:29.836991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.50 MiB 00:15:09.365 [2024-10-08 10:46:29.836998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:09.365 [2024-10-08 10:46:29.837007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:09.365 [2024-10-08 10:46:29.837014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:09.365 [2024-10-08 10:46:29.837022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:09.365 [2024-10-08 10:46:29.837029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:09.365 [2024-10-08 10:46:29.837037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.365 [2024-10-08 10:46:29.837043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:09.365 [2024-10-08 10:46:29.837050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:09.365 [2024-10-08 10:46:29.837057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.365 [2024-10-08 10:46:29.837066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:09.365 [2024-10-08 10:46:29.837072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:09.365 [2024-10-08 10:46:29.837083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:09.365 [2024-10-08 10:46:29.837089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:09.365 [2024-10-08 10:46:29.837097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:09.365 [2024-10-08 10:46:29.837103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:09.365 [2024-10-08 10:46:29.837110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:09.365 [2024-10-08 10:46:29.837116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:09.365 [2024-10-08 10:46:29.837126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:09.365 [2024-10-08 10:46:29.837132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:09.365 [2024-10-08 10:46:29.837139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:09.365 [2024-10-08 10:46:29.837149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:09.365 [2024-10-08 10:46:29.837157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:09.365 [2024-10-08 10:46:29.837163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:09.366 [2024-10-08 10:46:29.837171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:09.366 [2024-10-08 10:46:29.837177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:09.366 [2024-10-08 10:46:29.837185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:09.366 [2024-10-08 10:46:29.837190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:09.366 [2024-10-08 10:46:29.837198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:09.366 [2024-10-08 10:46:29.837204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:09.366 [2024-10-08 10:46:29.837211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.366 [2024-10-08 10:46:29.837217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:09.366 [2024-10-08 10:46:29.837225] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:09.366 [2024-10-08 10:46:29.837230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.366 [2024-10-08 10:46:29.837240] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:09.366 [2024-10-08 10:46:29.837250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:09.366 [2024-10-08 10:46:29.837258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:09.366 [2024-10-08 10:46:29.837264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:09.366 [2024-10-08 10:46:29.837272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:09.366 [2024-10-08 10:46:29.837278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:09.366 [2024-10-08 10:46:29.837285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:09.366 [2024-10-08 10:46:29.837291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:09.366 [2024-10-08 10:46:29.837299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:09.366 [2024-10-08 10:46:29.837305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:09.366 [2024-10-08 10:46:29.837316] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:09.366 [2024-10-08 10:46:29.837325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:09.366 [2024-10-08 10:46:29.837334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:09.366 [2024-10-08 10:46:29.837341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:09.366 [2024-10-08 10:46:29.837350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:09.366 [2024-10-08 10:46:29.837356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:09.366 [2024-10-08 10:46:29.837366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:09.366 [2024-10-08 10:46:29.837371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:09.366 [2024-10-08 10:46:29.837379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:09.366 [2024-10-08 10:46:29.837385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:09.366 [2024-10-08 10:46:29.837392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:09.366 [2024-10-08 10:46:29.837397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:09.366 [2024-10-08 10:46:29.837403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:09.366 [2024-10-08 
10:46:29.837409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:09.366 [2024-10-08 10:46:29.837415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:09.366 [2024-10-08 10:46:29.837421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:09.366 [2024-10-08 10:46:29.837428] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:09.366 [2024-10-08 10:46:29.837435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:09.366 [2024-10-08 10:46:29.837443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:09.366 [2024-10-08 10:46:29.837450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:09.366 [2024-10-08 10:46:29.837457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:09.366 [2024-10-08 10:46:29.837462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:09.366 [2024-10-08 10:46:29.837472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.366 [2024-10-08 10:46:29.837482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:09.366 [2024-10-08 10:46:29.837489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:15:09.366 [2024-10-08 10:46:29.837495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.366 [2024-10-08 10:46:29.837520] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
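The FTL startup sequence above (it resumes immediately below with the scrub itself) reports every management step as an Action / name / duration / status quadruple from mngt/ftl_mngt.c. A hedged sketch for digesting such a trace offline, assuming the records have been captured one per line into a hypothetical ftl.log; the sed patterns simply match the record text shown above:

# Pair each "name:" record with the "duration:" record that follows it and
# print the slowest startup steps first (bash, for the $'\t' separator).
paste \
  <(sed -n 's/.*trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: //p' ftl.log) \
  <(sed -n 's/.*duration: \([0-9.]*\) ms.*/\1/p' ftl.log) |
  sort -t$'\t' -k2 -rn | head

Against the startup below, such a digest would surface "Scrub NV cache" (about 3.5 s of the roughly 3.8 s total) as the dominant step.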
00:15:09.366 [2024-10-08 10:46:29.837528] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:13.574 [2024-10-08 10:46:33.377666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.574 [2024-10-08 10:46:33.377752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:13.574 [2024-10-08 10:46:33.377780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3540.125 ms 00:15:13.574 [2024-10-08 10:46:33.377791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.574 [2024-10-08 10:46:33.408319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.574 [2024-10-08 10:46:33.408407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:13.574 [2024-10-08 10:46:33.408439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.357 ms 00:15:13.574 [2024-10-08 10:46:33.408453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.574 [2024-10-08 10:46:33.408685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.574 [2024-10-08 10:46:33.408707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:13.574 [2024-10-08 10:46:33.408726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:15:13.574 [2024-10-08 10:46:33.408739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.574 [2024-10-08 10:46:33.425618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.574 [2024-10-08 10:46:33.425672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:13.574 [2024-10-08 10:46:33.425693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.816 ms 00:15:13.574 [2024-10-08 10:46:33.425707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.574 [2024-10-08 10:46:33.425745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.574 [2024-10-08 10:46:33.425755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:13.574 [2024-10-08 10:46:33.425767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:13.574 [2024-10-08 10:46:33.425778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.574 [2024-10-08 10:46:33.426508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.426566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:13.575 [2024-10-08 10:46:33.426585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:15:13.575 [2024-10-08 10:46:33.426600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.426734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.426746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:13.575 [2024-10-08 10:46:33.426762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:15:13.575 [2024-10-08 10:46:33.426771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.436395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.436443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:13.575 [2024-10-08 
10:46:33.436458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.595 ms 00:15:13.575 [2024-10-08 10:46:33.436474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.448010] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:15:13.575 [2024-10-08 10:46:33.457282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.457336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:13.575 [2024-10-08 10:46:33.457348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.718 ms 00:15:13.575 [2024-10-08 10:46:33.457360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.541650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.541716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:13.575 [2024-10-08 10:46:33.541729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.261 ms 00:15:13.575 [2024-10-08 10:46:33.541742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.541984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.542006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:13.575 [2024-10-08 10:46:33.542016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:15:13.575 [2024-10-08 10:46:33.542028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.547984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.548042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:13.575 [2024-10-08 10:46:33.548054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.920 ms 00:15:13.575 [2024-10-08 10:46:33.548066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.553475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.553533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:13.575 [2024-10-08 10:46:33.553544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.361 ms 00:15:13.575 [2024-10-08 10:46:33.553555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.553959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.553987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:13.575 [2024-10-08 10:46:33.553999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:15:13.575 [2024-10-08 10:46:33.554010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.602129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.602190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:13.575 [2024-10-08 10:46:33.602203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.096 ms 00:15:13.575 [2024-10-08 10:46:33.602216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.610229] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.610287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:13.575 [2024-10-08 10:46:33.610303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.940 ms 00:15:13.575 [2024-10-08 10:46:33.610315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.616301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.616357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:13.575 [2024-10-08 10:46:33.616368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.939 ms 00:15:13.575 [2024-10-08 10:46:33.616378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.622932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.622991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:13.575 [2024-10-08 10:46:33.623002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.507 ms 00:15:13.575 [2024-10-08 10:46:33.623014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.623065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.623079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:13.575 [2024-10-08 10:46:33.623092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:13.575 [2024-10-08 10:46:33.623109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.623220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:13.575 [2024-10-08 10:46:33.623237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:13.575 [2024-10-08 10:46:33.623246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:15:13.575 [2024-10-08 10:46:33.623257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:13.575 [2024-10-08 10:46:33.625846] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3799.679 ms, result 0 00:15:13.575 { 00:15:13.575 "name": "ftl0", 00:15:13.575 "uuid": "4b60e38a-37ec-40ca-bf08-3335c18144d1" 00:15:13.575 } 00:15:13.575 10:46:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:15:13.575 10:46:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:15:13.575 10:46:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:15:13.575 10:46:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:15:13.575 [2024-10-08 10:46:33.948063] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:13.575 I/O size of 69632 is greater than zero copy threshold (65536). 00:15:13.575 Zero copy mechanism will not be used. 00:15:13.575 Running I/O for 4 seconds... 
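bdevperf.sh@28 above verifies the device by piping bdev_ftl_get_stats through jq into a quiet grep, and @30 starts the first timed run, whose progress and results follow below. A standalone sketch of that probe-and-run pattern, with paths as in this log; it assumes the bdevperf app is already up and serving RPC (as it is here), and the usual bdevperf flag meanings (-q queue depth, -w workload, -t run time in seconds, -o IO size in bytes) are an assumption worth re-checking against the local tree:

SPDK=/home/vagrant/spdk_repo/spdk
# Probe: stats JSON -> device name -> quiet word match, as in bdevperf.sh@28.
"$SPDK/scripts/rpc.py" bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
# Run: 69632 B = 68 KiB per IO, above the 65536 B zero-copy threshold, which
# is why the log notes that the zero copy mechanism will not be used.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" perform_tests -q 1 -w randwrite -t 4 -o 69632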
00:15:15.461 705.00 IOPS, 46.82 MiB/s [2024-10-08T10:46:36.981Z] 727.00 IOPS, 48.28 MiB/s [2024-10-08T10:46:38.368Z] 739.33 IOPS, 49.10 MiB/s [2024-10-08T10:46:38.368Z] 747.25 IOPS, 49.62 MiB/s 00:15:17.791 Latency(us) 00:15:17.791 [2024-10-08T10:46:38.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.791 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:15:17.791 ftl0 : 4.00 747.26 49.62 0.00 0.00 1419.13 335.56 2671.85 00:15:17.791 [2024-10-08T10:46:38.368Z] =================================================================================================================== 00:15:17.791 [2024-10-08T10:46:38.368Z] Total : 747.26 49.62 0.00 0.00 1419.13 335.56 2671.85 00:15:17.791 [2024-10-08 10:46:37.955287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:17.791 { 00:15:17.791 "results": [ 00:15:17.791 { 00:15:17.791 "job": "ftl0", 00:15:17.791 "core_mask": "0x1", 00:15:17.791 "workload": "randwrite", 00:15:17.791 "status": "finished", 00:15:17.791 "queue_depth": 1, 00:15:17.791 "io_size": 69632, 00:15:17.791 "runtime": 4.001304, 00:15:17.791 "iops": 747.2563944154206, 00:15:17.791 "mibps": 49.62249494164902, 00:15:17.791 "io_failed": 0, 00:15:17.791 "io_timeout": 0, 00:15:17.791 "avg_latency_us": 1419.128590686905, 00:15:17.791 "min_latency_us": 335.55692307692306, 00:15:17.791 "max_latency_us": 2671.8523076923075 00:15:17.791 } 00:15:17.791 ], 00:15:17.791 "core_count": 1 00:15:17.791 } 00:15:17.791 10:46:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:15:17.791 [2024-10-08 10:46:38.065748] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:17.791 Running I/O for 4 seconds... 
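Each perform_tests call reports a results object like the one just above (job, iops, mibps, and latency fields in microseconds). A hypothetical post-processing one-liner over a saved copy of that output, reusing the jq style this test already employs; results.json is an assumed capture, not a file the test writes:

# Summarize every job in a saved perform_tests result blob.
jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us (min \(.min_latency_us), max \(.max_latency_us))"' results.json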
00:15:19.677 7838.00 IOPS, 30.62 MiB/s [2024-10-08T10:46:41.199Z] 7011.50 IOPS, 27.39 MiB/s [2024-10-08T10:46:42.191Z] 6819.67 IOPS, 26.64 MiB/s [2024-10-08T10:46:42.191Z] 6846.00 IOPS, 26.74 MiB/s 00:15:21.614 Latency(us) 00:15:21.614 [2024-10-08T10:46:42.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.614 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.614 ftl0 : 4.03 6826.03 26.66 0.00 0.00 18682.30 359.19 66544.25 00:15:21.614 [2024-10-08T10:46:42.191Z] =================================================================================================================== 00:15:21.614 [2024-10-08T10:46:42.191Z] Total : 6826.03 26.66 0.00 0.00 18682.30 0.00 66544.25 00:15:21.614 [2024-10-08 10:46:42.101039] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:21.614 { 00:15:21.614 "results": [ 00:15:21.614 { 00:15:21.614 "job": "ftl0", 00:15:21.614 "core_mask": "0x1", 00:15:21.614 "workload": "randwrite", 00:15:21.614 "status": "finished", 00:15:21.614 "queue_depth": 128, 00:15:21.614 "io_size": 4096, 00:15:21.614 "runtime": 4.02928, 00:15:21.614 "iops": 6826.033435253941, 00:15:21.614 "mibps": 26.664193106460708, 00:15:21.614 "io_failed": 0, 00:15:21.614 "io_timeout": 0, 00:15:21.614 "avg_latency_us": 18682.296276010205, 00:15:21.614 "min_latency_us": 359.1876923076923, 00:15:21.614 "max_latency_us": 66544.24615384615 00:15:21.614 } 00:15:21.614 ], 00:15:21.614 "core_count": 1 00:15:21.614 } 00:15:21.615 10:46:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:15:21.885 [2024-10-08 10:46:42.195623] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:15:21.885 Running I/O for 4 seconds... 
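The three timed runs in this test vary only queue depth, workload, and IO size; the verify run announced above additionally reads data back and checks it against what was written, which is why its results below carry a "Verification LBA range" row. The same sequence compressed into a hedged loop, under the same assumptions as the earlier sketch:

SPDK=/home/vagrant/spdk_repo/spdk
for args in "1 randwrite 69632" "128 randwrite 4096" "128 verify 4096"; do
  set -- $args   # $1 = queue depth, $2 = workload, $3 = IO size in bytes
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" perform_tests -q "$1" -w "$2" -t 4 -o "$3"
done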
00:15:23.774 5280.00 IOPS, 20.62 MiB/s [2024-10-08T10:46:45.293Z] 5174.00 IOPS, 20.21 MiB/s [2024-10-08T10:46:46.239Z] 5184.67 IOPS, 20.25 MiB/s [2024-10-08T10:46:46.239Z] 5064.50 IOPS, 19.78 MiB/s 00:15:25.662 Latency(us) 00:15:25.662 [2024-10-08T10:46:46.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.662 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:25.662 Verification LBA range: start 0x0 length 0x1400000 00:15:25.662 ftl0 : 4.02 5077.30 19.83 0.00 0.00 25134.53 368.64 116956.55 00:15:25.662 [2024-10-08T10:46:46.239Z] =================================================================================================================== 00:15:25.662 [2024-10-08T10:46:46.239Z] Total : 5077.30 19.83 0.00 0.00 25134.53 0.00 116956.55 00:15:25.662 [2024-10-08 10:46:46.218791] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:15:25.662 { 00:15:25.662 "results": [ 00:15:25.662 { 00:15:25.662 "job": "ftl0", 00:15:25.662 "core_mask": "0x1", 00:15:25.662 "workload": "verify", 00:15:25.662 "status": "finished", 00:15:25.662 "verify_range": { 00:15:25.662 "start": 0, 00:15:25.662 "length": 20971520 00:15:25.662 }, 00:15:25.662 "queue_depth": 128, 00:15:25.662 "io_size": 4096, 00:15:25.662 "runtime": 4.015123, 00:15:25.662 "iops": 5077.303982966399, 00:15:25.662 "mibps": 19.833218683462498, 00:15:25.662 "io_failed": 0, 00:15:25.662 "io_timeout": 0, 00:15:25.662 "avg_latency_us": 25134.534856953112, 00:15:25.662 "min_latency_us": 368.64, 00:15:25.662 "max_latency_us": 116956.55384615385 00:15:25.662 } 00:15:25.662 ], 00:15:25.662 "core_count": 1 00:15:25.662 } 00:15:25.662 10:46:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:15:25.926 [2024-10-08 10:46:46.431166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.926 [2024-10-08 10:46:46.431223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:25.926 [2024-10-08 10:46:46.431237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:25.926 [2024-10-08 10:46:46.431249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.926 [2024-10-08 10:46:46.431270] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:25.926 [2024-10-08 10:46:46.432011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.926 [2024-10-08 10:46:46.432045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:25.927 [2024-10-08 10:46:46.432060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:15:25.927 [2024-10-08 10:46:46.432072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:25.927 [2024-10-08 10:46:46.434665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:25.927 [2024-10-08 10:46:46.434709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:25.927 [2024-10-08 10:46:46.434731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.558 ms 00:15:25.927 [2024-10-08 10:46:46.434739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.190 [2024-10-08 10:46:46.657926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.190 [2024-10-08 10:46:46.657985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:15:26.190 [2024-10-08 10:46:46.658004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 223.157 ms 00:15:26.190 [2024-10-08 10:46:46.658013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.190 [2024-10-08 10:46:46.664471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.190 [2024-10-08 10:46:46.664507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:26.190 [2024-10-08 10:46:46.664523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.411 ms 00:15:26.190 [2024-10-08 10:46:46.664532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.190 [2024-10-08 10:46:46.667395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.190 [2024-10-08 10:46:46.667439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:26.190 [2024-10-08 10:46:46.667452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:15:26.190 [2024-10-08 10:46:46.667460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.190 [2024-10-08 10:46:46.672818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.190 [2024-10-08 10:46:46.672861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:26.190 [2024-10-08 10:46:46.672881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.291 ms 00:15:26.190 [2024-10-08 10:46:46.672889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.190 [2024-10-08 10:46:46.673052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.190 [2024-10-08 10:46:46.673071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:26.190 [2024-10-08 10:46:46.673086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:15:26.190 [2024-10-08 10:46:46.673094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.190 [2024-10-08 10:46:46.676077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.190 [2024-10-08 10:46:46.676121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:26.190 [2024-10-08 10:46:46.676134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.958 ms 00:15:26.190 [2024-10-08 10:46:46.676141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.190 [2024-10-08 10:46:46.678784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.190 [2024-10-08 10:46:46.678847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:26.191 [2024-10-08 10:46:46.678861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.594 ms 00:15:26.191 [2024-10-08 10:46:46.678870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.191 [2024-10-08 10:46:46.680986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.191 [2024-10-08 10:46:46.681023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:26.191 [2024-10-08 10:46:46.681039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.048 ms 00:15:26.191 [2024-10-08 10:46:46.681046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.191 [2024-10-08 10:46:46.683157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.191 [2024-10-08 10:46:46.683198] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:26.191 [2024-10-08 10:46:46.683210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.017 ms 00:15:26.191 [2024-10-08 10:46:46.683216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.191 [2024-10-08 10:46:46.683259] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:26.191 [2024-10-08 10:46:46.683276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:15:26.191 [2024-10-08 10:46:46.683477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:26.191 [2024-10-08 10:46:46.683980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.683987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.683997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684175] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:26.192 [2024-10-08 10:46:46.684217] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:26.192 [2024-10-08 10:46:46.684228] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4b60e38a-37ec-40ca-bf08-3335c18144d1 00:15:26.192 [2024-10-08 10:46:46.684239] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:26.192 [2024-10-08 10:46:46.684248] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:26.192 [2024-10-08 10:46:46.684256] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:26.192 [2024-10-08 10:46:46.684269] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:26.192 [2024-10-08 10:46:46.684277] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:26.192 [2024-10-08 10:46:46.684287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:26.192 [2024-10-08 10:46:46.684294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:26.192 [2024-10-08 10:46:46.684303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:26.192 [2024-10-08 10:46:46.684310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:26.192 [2024-10-08 10:46:46.684320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.192 [2024-10-08 10:46:46.684329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:26.192 [2024-10-08 10:46:46.684339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:15:26.192 [2024-10-08 10:46:46.684350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.686846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.192 [2024-10-08 10:46:46.686877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:26.192 [2024-10-08 10:46:46.686889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.471 ms 00:15:26.192 [2024-10-08 10:46:46.686897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.687011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:26.192 [2024-10-08 10:46:46.687020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:26.192 [2024-10-08 10:46:46.687034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:15:26.192 [2024-10-08 10:46:46.687043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.694178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:26.192 [2024-10-08 10:46:46.694219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:26.192 [2024-10-08 10:46:46.694232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:26.192 [2024-10-08 10:46:46.694244] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.694316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:26.192 [2024-10-08 10:46:46.694324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:26.192 [2024-10-08 10:46:46.694335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:26.192 [2024-10-08 10:46:46.694343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.694420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:26.192 [2024-10-08 10:46:46.694430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:26.192 [2024-10-08 10:46:46.694445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:26.192 [2024-10-08 10:46:46.694453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.694474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:26.192 [2024-10-08 10:46:46.694481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:26.192 [2024-10-08 10:46:46.694494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:26.192 [2024-10-08 10:46:46.694502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.707627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:26.192 [2024-10-08 10:46:46.707673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:26.192 [2024-10-08 10:46:46.707686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:26.192 [2024-10-08 10:46:46.707694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.718232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:26.192 [2024-10-08 10:46:46.718274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:26.192 [2024-10-08 10:46:46.718287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:26.192 [2024-10-08 10:46:46.718296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.718372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:26.192 [2024-10-08 10:46:46.718382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:26.192 [2024-10-08 10:46:46.718393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:26.192 [2024-10-08 10:46:46.718401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.718447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:26.192 [2024-10-08 10:46:46.718457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:26.192 [2024-10-08 10:46:46.718471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:26.192 [2024-10-08 10:46:46.718478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.718550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:26.192 [2024-10-08 10:46:46.718563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:26.192 [2024-10-08 10:46:46.718573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:15:26.192 [2024-10-08 10:46:46.718581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.718618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:26.192 [2024-10-08 10:46:46.718627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:26.192 [2024-10-08 10:46:46.718637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:26.192 [2024-10-08 10:46:46.718645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.718686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:26.192 [2024-10-08 10:46:46.718698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:26.192 [2024-10-08 10:46:46.718708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:26.192 [2024-10-08 10:46:46.718715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.718770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:26.192 [2024-10-08 10:46:46.718780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:26.192 [2024-10-08 10:46:46.718809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:26.192 [2024-10-08 10:46:46.718818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:26.192 [2024-10-08 10:46:46.718964] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 287.759 ms, result 0 00:15:26.192 true 00:15:26.192 10:46:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 86077 00:15:26.192 10:46:46 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 86077 ']' 00:15:26.192 10:46:46 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 86077 00:15:26.192 10:46:46 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:15:26.193 10:46:46 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:26.193 10:46:46 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86077 00:15:26.453 killing process with pid 86077 00:15:26.453 Received shutdown signal, test time was about 4.000000 seconds 00:15:26.454 00:15:26.454 Latency(us) 00:15:26.454 [2024-10-08T10:46:47.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.454 [2024-10-08T10:46:47.031Z] =================================================================================================================== 00:15:26.454 [2024-10-08T10:46:47.031Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:26.454 10:46:46 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:26.454 10:46:46 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:26.454 10:46:46 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86077' 00:15:26.454 10:46:46 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 86077 00:15:26.454 10:46:46 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 86077 00:15:26.454 10:46:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:26.454 10:46:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:15:26.454 Remove shared memory files 00:15:26.454 10:46:47 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:26.454 10:46:47 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:15:26.454 10:46:47 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:15:26.454 10:46:47 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:15:26.716 10:46:47 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:26.716 10:46:47 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:15:26.716 00:15:26.716 real 0m21.088s 00:15:26.716 user 0m23.720s 00:15:26.716 sys 0m0.855s 00:15:26.716 10:46:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:26.716 ************************************ 00:15:26.716 END TEST ftl_bdevperf 00:15:26.716 ************************************ 00:15:26.716 10:46:47 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:26.716 10:46:47 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:15:26.716 10:46:47 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:26.716 10:46:47 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:26.716 10:46:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:26.716 ************************************ 00:15:26.716 START TEST ftl_trim 00:15:26.716 ************************************ 00:15:26.716 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:15:26.716 * Looking for test storage... 00:15:26.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.717 10:46:47 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:26.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.717 --rc genhtml_branch_coverage=1 00:15:26.717 --rc genhtml_function_coverage=1 00:15:26.717 --rc genhtml_legend=1 00:15:26.717 --rc geninfo_all_blocks=1 00:15:26.717 --rc geninfo_unexecuted_blocks=1 00:15:26.717 00:15:26.717 ' 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:26.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.717 --rc genhtml_branch_coverage=1 00:15:26.717 --rc genhtml_function_coverage=1 00:15:26.717 --rc genhtml_legend=1 00:15:26.717 --rc geninfo_all_blocks=1 00:15:26.717 --rc geninfo_unexecuted_blocks=1 00:15:26.717 00:15:26.717 ' 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:26.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.717 --rc genhtml_branch_coverage=1 00:15:26.717 --rc genhtml_function_coverage=1 00:15:26.717 --rc genhtml_legend=1 00:15:26.717 --rc geninfo_all_blocks=1 00:15:26.717 --rc geninfo_unexecuted_blocks=1 00:15:26.717 00:15:26.717 ' 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:26.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.717 --rc genhtml_branch_coverage=1 00:15:26.717 --rc genhtml_function_coverage=1 00:15:26.717 --rc genhtml_legend=1 00:15:26.717 --rc geninfo_all_blocks=1 00:15:26.717 --rc geninfo_unexecuted_blocks=1 00:15:26.717 00:15:26.717 ' 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
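The xtrace above (scripts/common.sh@333-368) steps through the lcov version gate that decides which coverage flags get exported. A minimal reconstruction of that comparison logic, inferred from the trace rather than copied from the SPDK source:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    # Split dotted/dashed version strings into arrays and compare field
    # by field, as the trace's IFS=.-/read -ra steps show. Sketch only.
    local ver1 ver2 v op=$2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == *'>'* ]]; return; fi
        if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == *'<'* ]]; return; fi
    done
    [[ $op == *'='* ]]  # all fields equal: only <=, >=, == compare true
}

lt 1.15 2 && echo "lcov older than 2.x"  # returns 0 here, matching the trace above
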
00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:26.717 10:46:47 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=86412 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 86412 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86412 ']' 00:15:26.717 10:46:47 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.717 10:46:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:15:26.979 [2024-10-08 10:46:47.349300] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:15:26.979 [2024-10-08 10:46:47.349450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86412 ] 00:15:26.979 [2024-10-08 10:46:47.483297] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:26.979 [2024-10-08 10:46:47.502755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:27.241 [2024-10-08 10:46:47.561905] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.241 [2024-10-08 10:46:47.562355] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.241 [2024-10-08 10:46:47.562467] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.813 10:46:48 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:27.813 10:46:48 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:15:27.813 10:46:48 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:27.814 10:46:48 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:15:27.814 10:46:48 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:27.814 10:46:48 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:15:27.814 10:46:48 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:15:27.814 10:46:48 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:28.075 10:46:48 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:28.075 10:46:48 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:15:28.075 10:46:48 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:28.075 10:46:48 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:28.075 10:46:48 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:28.075 10:46:48 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:15:28.075 10:46:48 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:15:28.075 10:46:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:28.337 10:46:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:28.337 { 00:15:28.337 "name": "nvme0n1", 00:15:28.337 "aliases": [ 00:15:28.337 "ab130527-b16c-4682-bd30-0cd2ed694d74" 00:15:28.337 ], 00:15:28.337 "product_name": "NVMe disk", 00:15:28.337 "block_size": 4096, 00:15:28.338 "num_blocks": 1310720, 00:15:28.338 "uuid": "ab130527-b16c-4682-bd30-0cd2ed694d74", 00:15:28.338 "numa_id": -1, 00:15:28.338 "assigned_rate_limits": { 00:15:28.338 "rw_ios_per_sec": 0, 00:15:28.338 "rw_mbytes_per_sec": 0, 00:15:28.338 "r_mbytes_per_sec": 0, 00:15:28.338 "w_mbytes_per_sec": 0 00:15:28.338 }, 00:15:28.338 "claimed": true, 00:15:28.338 "claim_type": "read_many_write_one", 00:15:28.338 "zoned": false, 00:15:28.338 "supported_io_types": { 00:15:28.338 "read": true, 00:15:28.338 "write": true, 00:15:28.338 "unmap": true, 00:15:28.338 "flush": true, 00:15:28.338 "reset": true, 00:15:28.338 "nvme_admin": true, 00:15:28.338 "nvme_io": true, 00:15:28.338 "nvme_io_md": false, 00:15:28.338 "write_zeroes": true, 00:15:28.338 "zcopy": false, 00:15:28.338 "get_zone_info": false, 00:15:28.338 "zone_management": false, 00:15:28.338 "zone_append": false, 00:15:28.338 "compare": true, 00:15:28.338 "compare_and_write": false, 00:15:28.338 "abort": true, 00:15:28.338 "seek_hole": false, 00:15:28.338 "seek_data": false, 00:15:28.338 "copy": true, 00:15:28.338 "nvme_iov_md": false 00:15:28.338 }, 00:15:28.338 "driver_specific": { 00:15:28.338 "nvme": [ 00:15:28.338 { 00:15:28.338 "pci_address": "0000:00:11.0", 00:15:28.338 "trid": { 00:15:28.338 "trtype": "PCIe", 00:15:28.338 "traddr": "0000:00:11.0" 00:15:28.338 }, 00:15:28.338 "ctrlr_data": { 00:15:28.338 "cntlid": 0, 00:15:28.338 "vendor_id": "0x1b36", 00:15:28.338 "model_number": "QEMU NVMe Ctrl", 00:15:28.338 "serial_number": "12341", 00:15:28.338 "firmware_revision": "8.0.0", 00:15:28.338 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:28.338 "oacs": { 00:15:28.338 "security": 0, 00:15:28.338 "format": 1, 00:15:28.338 "firmware": 0, 00:15:28.338 "ns_manage": 1 00:15:28.338 }, 00:15:28.338 "multi_ctrlr": false, 00:15:28.338 "ana_reporting": false 00:15:28.338 }, 00:15:28.338 "vs": { 00:15:28.338 "nvme_version": "1.4" 00:15:28.338 }, 00:15:28.338 "ns_data": { 00:15:28.338 "id": 1, 00:15:28.338 "can_share": false 00:15:28.338 } 00:15:28.338 } 00:15:28.338 ], 00:15:28.338 "mp_policy": "active_passive" 00:15:28.338 } 00:15:28.338 } 00:15:28.338 ]' 00:15:28.338 10:46:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:28.338 10:46:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:15:28.338 10:46:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:28.338 10:46:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:28.338 10:46:48 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:28.338 10:46:48 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:15:28.338 10:46:48 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:15:28.338 10:46:48 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:28.338 10:46:48 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:15:28.338 10:46:48 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:28.338 10:46:48 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:28.600 10:46:48 ftl.ftl_trim -- ftl/common.sh@28 -- # 
stores=3f1a8d52-6327-4200-aa9f-029210b9d1aa 00:15:28.600 10:46:48 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:15:28.600 10:46:48 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3f1a8d52-6327-4200-aa9f-029210b9d1aa 00:15:28.862 10:46:49 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:29.124 10:46:49 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=6c6dbcfc-4039-4795-a142-8a99520ed99c 00:15:29.124 10:46:49 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6c6dbcfc-4039-4795-a142-8a99520ed99c 00:15:29.386 10:46:49 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=22e9260f-6e1b-4a4d-962b-bc0adfab5c9e 00:15:29.386 10:46:49 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 22e9260f-6e1b-4a4d-962b-bc0adfab5c9e 00:15:29.386 10:46:49 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:15:29.386 10:46:49 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:29.386 10:46:49 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=22e9260f-6e1b-4a4d-962b-bc0adfab5c9e 00:15:29.386 10:46:49 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:15:29.386 10:46:49 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 22e9260f-6e1b-4a4d-962b-bc0adfab5c9e 00:15:29.386 10:46:49 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=22e9260f-6e1b-4a4d-962b-bc0adfab5c9e 00:15:29.386 10:46:49 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:29.386 10:46:49 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:15:29.386 10:46:49 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:15:29.386 10:46:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 22e9260f-6e1b-4a4d-962b-bc0adfab5c9e 00:15:29.386 10:46:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:29.386 { 00:15:29.386 "name": "22e9260f-6e1b-4a4d-962b-bc0adfab5c9e", 00:15:29.386 "aliases": [ 00:15:29.386 "lvs/nvme0n1p0" 00:15:29.386 ], 00:15:29.386 "product_name": "Logical Volume", 00:15:29.387 "block_size": 4096, 00:15:29.387 "num_blocks": 26476544, 00:15:29.387 "uuid": "22e9260f-6e1b-4a4d-962b-bc0adfab5c9e", 00:15:29.387 "assigned_rate_limits": { 00:15:29.387 "rw_ios_per_sec": 0, 00:15:29.387 "rw_mbytes_per_sec": 0, 00:15:29.387 "r_mbytes_per_sec": 0, 00:15:29.387 "w_mbytes_per_sec": 0 00:15:29.387 }, 00:15:29.387 "claimed": false, 00:15:29.387 "zoned": false, 00:15:29.387 "supported_io_types": { 00:15:29.387 "read": true, 00:15:29.387 "write": true, 00:15:29.387 "unmap": true, 00:15:29.387 "flush": false, 00:15:29.387 "reset": true, 00:15:29.387 "nvme_admin": false, 00:15:29.387 "nvme_io": false, 00:15:29.387 "nvme_io_md": false, 00:15:29.387 "write_zeroes": true, 00:15:29.387 "zcopy": false, 00:15:29.387 "get_zone_info": false, 00:15:29.387 "zone_management": false, 00:15:29.387 "zone_append": false, 00:15:29.387 "compare": false, 00:15:29.387 "compare_and_write": false, 00:15:29.387 "abort": false, 00:15:29.387 "seek_hole": true, 00:15:29.387 "seek_data": true, 00:15:29.387 "copy": false, 00:15:29.387 "nvme_iov_md": false 00:15:29.387 }, 00:15:29.387 "driver_specific": { 00:15:29.387 "lvol": { 00:15:29.387 "lvol_store_uuid": "6c6dbcfc-4039-4795-a142-8a99520ed99c", 00:15:29.387 "base_bdev": "nvme0n1", 00:15:29.387 "thin_provision": true, 
00:15:29.387 "num_allocated_clusters": 0, 00:15:29.387 "snapshot": false, 00:15:29.387 "clone": false, 00:15:29.387 "esnap_clone": false 00:15:29.387 } 00:15:29.387 } 00:15:29.387 } 00:15:29.387 ]' 00:15:29.387 10:46:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:29.387 10:46:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:15:29.387 10:46:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:29.649 10:46:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:29.649 10:46:49 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:29.649 10:46:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:15:29.649 10:46:49 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:15:29.649 10:46:49 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:15:29.649 10:46:49 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:29.649 10:46:50 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:29.649 10:46:50 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:29.649 10:46:50 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 22e9260f-6e1b-4a4d-962b-bc0adfab5c9e 00:15:29.649 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=22e9260f-6e1b-4a4d-962b-bc0adfab5c9e 00:15:29.649 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:29.649 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:15:29.649 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:15:29.649 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 22e9260f-6e1b-4a4d-962b-bc0adfab5c9e 00:15:29.911 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:29.911 { 00:15:29.911 "name": "22e9260f-6e1b-4a4d-962b-bc0adfab5c9e", 00:15:29.911 "aliases": [ 00:15:29.911 "lvs/nvme0n1p0" 00:15:29.911 ], 00:15:29.911 "product_name": "Logical Volume", 00:15:29.911 "block_size": 4096, 00:15:29.911 "num_blocks": 26476544, 00:15:29.911 "uuid": "22e9260f-6e1b-4a4d-962b-bc0adfab5c9e", 00:15:29.911 "assigned_rate_limits": { 00:15:29.911 "rw_ios_per_sec": 0, 00:15:29.911 "rw_mbytes_per_sec": 0, 00:15:29.911 "r_mbytes_per_sec": 0, 00:15:29.911 "w_mbytes_per_sec": 0 00:15:29.911 }, 00:15:29.911 "claimed": false, 00:15:29.912 "zoned": false, 00:15:29.912 "supported_io_types": { 00:15:29.912 "read": true, 00:15:29.912 "write": true, 00:15:29.912 "unmap": true, 00:15:29.912 "flush": false, 00:15:29.912 "reset": true, 00:15:29.912 "nvme_admin": false, 00:15:29.912 "nvme_io": false, 00:15:29.912 "nvme_io_md": false, 00:15:29.912 "write_zeroes": true, 00:15:29.912 "zcopy": false, 00:15:29.912 "get_zone_info": false, 00:15:29.912 "zone_management": false, 00:15:29.912 "zone_append": false, 00:15:29.912 "compare": false, 00:15:29.912 "compare_and_write": false, 00:15:29.912 "abort": false, 00:15:29.912 "seek_hole": true, 00:15:29.912 "seek_data": true, 00:15:29.912 "copy": false, 00:15:29.912 "nvme_iov_md": false 00:15:29.912 }, 00:15:29.912 "driver_specific": { 00:15:29.912 "lvol": { 00:15:29.912 "lvol_store_uuid": "6c6dbcfc-4039-4795-a142-8a99520ed99c", 00:15:29.912 "base_bdev": "nvme0n1", 00:15:29.912 "thin_provision": true, 00:15:29.912 "num_allocated_clusters": 0, 00:15:29.912 "snapshot": false, 00:15:29.912 "clone": false, 00:15:29.912 
"esnap_clone": false 00:15:29.912 } 00:15:29.912 } 00:15:29.912 } 00:15:29.912 ]' 00:15:29.912 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:29.912 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:15:29.912 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:29.912 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:29.912 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:29.912 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:15:29.912 10:46:50 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:15:29.912 10:46:50 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:30.172 10:46:50 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:15:30.172 10:46:50 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:15:30.172 10:46:50 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 22e9260f-6e1b-4a4d-962b-bc0adfab5c9e 00:15:30.172 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=22e9260f-6e1b-4a4d-962b-bc0adfab5c9e 00:15:30.172 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:30.172 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:15:30.172 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:15:30.172 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 22e9260f-6e1b-4a4d-962b-bc0adfab5c9e 00:15:30.436 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:30.436 { 00:15:30.436 "name": "22e9260f-6e1b-4a4d-962b-bc0adfab5c9e", 00:15:30.436 "aliases": [ 00:15:30.436 "lvs/nvme0n1p0" 00:15:30.436 ], 00:15:30.436 "product_name": "Logical Volume", 00:15:30.436 "block_size": 4096, 00:15:30.436 "num_blocks": 26476544, 00:15:30.436 "uuid": "22e9260f-6e1b-4a4d-962b-bc0adfab5c9e", 00:15:30.436 "assigned_rate_limits": { 00:15:30.436 "rw_ios_per_sec": 0, 00:15:30.436 "rw_mbytes_per_sec": 0, 00:15:30.436 "r_mbytes_per_sec": 0, 00:15:30.436 "w_mbytes_per_sec": 0 00:15:30.436 }, 00:15:30.436 "claimed": false, 00:15:30.436 "zoned": false, 00:15:30.436 "supported_io_types": { 00:15:30.436 "read": true, 00:15:30.436 "write": true, 00:15:30.436 "unmap": true, 00:15:30.436 "flush": false, 00:15:30.436 "reset": true, 00:15:30.436 "nvme_admin": false, 00:15:30.436 "nvme_io": false, 00:15:30.436 "nvme_io_md": false, 00:15:30.436 "write_zeroes": true, 00:15:30.436 "zcopy": false, 00:15:30.436 "get_zone_info": false, 00:15:30.436 "zone_management": false, 00:15:30.436 "zone_append": false, 00:15:30.436 "compare": false, 00:15:30.436 "compare_and_write": false, 00:15:30.436 "abort": false, 00:15:30.436 "seek_hole": true, 00:15:30.436 "seek_data": true, 00:15:30.436 "copy": false, 00:15:30.436 "nvme_iov_md": false 00:15:30.436 }, 00:15:30.436 "driver_specific": { 00:15:30.436 "lvol": { 00:15:30.436 "lvol_store_uuid": "6c6dbcfc-4039-4795-a142-8a99520ed99c", 00:15:30.436 "base_bdev": "nvme0n1", 00:15:30.436 "thin_provision": true, 00:15:30.436 "num_allocated_clusters": 0, 00:15:30.436 "snapshot": false, 00:15:30.437 "clone": false, 00:15:30.437 "esnap_clone": false 00:15:30.437 } 00:15:30.437 } 00:15:30.437 } 00:15:30.437 ]' 00:15:30.437 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:30.437 10:46:50 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:15:30.437 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:30.437 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:30.437 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:30.437 10:46:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:15:30.437 10:46:50 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:15:30.437 10:46:50 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 22e9260f-6e1b-4a4d-962b-bc0adfab5c9e -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:15:30.698 [2024-10-08 10:46:51.146695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.698 [2024-10-08 10:46:51.146733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:30.698 [2024-10-08 10:46:51.146745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:30.698 [2024-10-08 10:46:51.146751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.698 [2024-10-08 10:46:51.148703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.698 [2024-10-08 10:46:51.148741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:30.698 [2024-10-08 10:46:51.148751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.928 ms 00:15:30.698 [2024-10-08 10:46:51.148757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.698 [2024-10-08 10:46:51.148915] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:30.698 [2024-10-08 10:46:51.149120] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:30.698 [2024-10-08 10:46:51.149138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.698 [2024-10-08 10:46:51.149145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:30.698 [2024-10-08 10:46:51.149153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:15:30.698 [2024-10-08 10:46:51.149159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.698 [2024-10-08 10:46:51.149449] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9e839222-06df-480c-bbd4-867c33d1e348 00:15:30.698 [2024-10-08 10:46:51.150474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.698 [2024-10-08 10:46:51.150580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:30.698 [2024-10-08 10:46:51.150593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:15:30.698 [2024-10-08 10:46:51.150601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.698 [2024-10-08 10:46:51.155755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.698 [2024-10-08 10:46:51.155782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:30.698 [2024-10-08 10:46:51.155790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.082 ms 00:15:30.698 [2024-10-08 10:46:51.155807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.698 [2024-10-08 10:46:51.155921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
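Condensed from the rpc.py calls traced above, the bdev stack under ftl0 that the startup log below brings up looks roughly like this. A sketch, not a drop-in script: UUIDs vary per run, and the harness's stale-lvstore cleanup and xtrace plumbing are omitted.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: 5 GiB QEMU NVMe namespace at 0000:00:11.0
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
# Lvstore plus a thin-provisioned 103424 MiB lvol on it (thin, so it
# oversubscribes the 5 GiB namespace)
lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)
base=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")
# Cache device: second NVMe at 0000:00:10.0, split into a 5171 MiB slice
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$rpc bdev_split_create nvc0n1 -s 5171 1
# FTL bdev on top: lvol as base device, nvc0n1p0 as NV cache write buffer
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$base" -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
# waitforbdev then polls until ftl0 appears:
$rpc bdev_get_bdevs -b ftl0 -t 2000
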
00:15:30.698 [2024-10-08 10:46:51.155932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:30.698 [2024-10-08 10:46:51.155938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:15:30.698 [2024-10-08 10:46:51.155965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.698 [2024-10-08 10:46:51.156003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.698 [2024-10-08 10:46:51.156021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:30.698 [2024-10-08 10:46:51.156034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:30.698 [2024-10-08 10:46:51.156041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.698 [2024-10-08 10:46:51.156087] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:30.698 [2024-10-08 10:46:51.157389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.698 [2024-10-08 10:46:51.157479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:30.698 [2024-10-08 10:46:51.157492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.304 ms 00:15:30.698 [2024-10-08 10:46:51.157498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.698 [2024-10-08 10:46:51.157539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.699 [2024-10-08 10:46:51.157547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:30.699 [2024-10-08 10:46:51.157556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:30.699 [2024-10-08 10:46:51.157562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.699 [2024-10-08 10:46:51.157591] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:30.699 [2024-10-08 10:46:51.157703] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:30.699 [2024-10-08 10:46:51.157714] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:30.699 [2024-10-08 10:46:51.157722] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:30.699 [2024-10-08 10:46:51.157732] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:30.699 [2024-10-08 10:46:51.157738] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:30.699 [2024-10-08 10:46:51.157746] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:30.699 [2024-10-08 10:46:51.157751] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:30.699 [2024-10-08 10:46:51.157759] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:30.699 [2024-10-08 10:46:51.157765] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:30.699 [2024-10-08 10:46:51.157772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.699 [2024-10-08 10:46:51.157786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:30.699 [2024-10-08 10:46:51.157814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 
00:15:30.699 [2024-10-08 10:46:51.157821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.699 [2024-10-08 10:46:51.157899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.699 [2024-10-08 10:46:51.157905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:30.699 [2024-10-08 10:46:51.157913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:30.699 [2024-10-08 10:46:51.157918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.699 [2024-10-08 10:46:51.158019] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:30.699 [2024-10-08 10:46:51.158026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:30.699 [2024-10-08 10:46:51.158033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:30.699 [2024-10-08 10:46:51.158041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:30.699 [2024-10-08 10:46:51.158048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:30.699 [2024-10-08 10:46:51.158053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:30.699 [2024-10-08 10:46:51.158060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:30.699 [2024-10-08 10:46:51.158065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:30.699 [2024-10-08 10:46:51.158072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:30.699 [2024-10-08 10:46:51.158077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:30.699 [2024-10-08 10:46:51.158084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:30.699 [2024-10-08 10:46:51.158090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:30.699 [2024-10-08 10:46:51.158101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:30.699 [2024-10-08 10:46:51.158107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:30.699 [2024-10-08 10:46:51.158114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:15:30.699 [2024-10-08 10:46:51.158120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:30.699 [2024-10-08 10:46:51.158127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:30.699 [2024-10-08 10:46:51.158133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:15:30.699 [2024-10-08 10:46:51.158140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:30.699 [2024-10-08 10:46:51.158145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:30.699 [2024-10-08 10:46:51.158152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:30.699 [2024-10-08 10:46:51.158158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:30.699 [2024-10-08 10:46:51.158165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:30.699 [2024-10-08 10:46:51.158171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:30.699 [2024-10-08 10:46:51.158178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:30.699 [2024-10-08 10:46:51.158185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:30.699 [2024-10-08 10:46:51.158192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
107.12 MiB 00:15:30.699 [2024-10-08 10:46:51.158198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:30.699 [2024-10-08 10:46:51.158206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:30.699 [2024-10-08 10:46:51.158212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:15:30.699 [2024-10-08 10:46:51.158219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:30.699 [2024-10-08 10:46:51.158225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:30.699 [2024-10-08 10:46:51.158232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:15:30.699 [2024-10-08 10:46:51.158237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:30.699 [2024-10-08 10:46:51.158245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:30.699 [2024-10-08 10:46:51.158250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:15:30.699 [2024-10-08 10:46:51.158257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:30.699 [2024-10-08 10:46:51.158263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:30.699 [2024-10-08 10:46:51.158271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:15:30.699 [2024-10-08 10:46:51.158277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:30.699 [2024-10-08 10:46:51.158297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:30.699 [2024-10-08 10:46:51.158303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:15:30.699 [2024-10-08 10:46:51.158309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:30.699 [2024-10-08 10:46:51.158315] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:30.699 [2024-10-08 10:46:51.158324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:30.699 [2024-10-08 10:46:51.158330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:30.699 [2024-10-08 10:46:51.158338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:30.699 [2024-10-08 10:46:51.158345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:30.699 [2024-10-08 10:46:51.158352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:30.699 [2024-10-08 10:46:51.158358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:30.699 [2024-10-08 10:46:51.158366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:30.699 [2024-10-08 10:46:51.158371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:30.699 [2024-10-08 10:46:51.158378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:30.699 [2024-10-08 10:46:51.158386] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:30.699 [2024-10-08 10:46:51.158395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:30.699 [2024-10-08 10:46:51.158411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:30.699 [2024-10-08 10:46:51.158419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:15:30.699 [2024-10-08 10:46:51.158426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:15:30.699 [2024-10-08 10:46:51.158433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:15:30.699 [2024-10-08 10:46:51.158440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:15:30.699 [2024-10-08 10:46:51.158449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:15:30.699 [2024-10-08 10:46:51.158455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:15:30.699 [2024-10-08 10:46:51.158462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:15:30.699 [2024-10-08 10:46:51.158468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:15:30.699 [2024-10-08 10:46:51.158474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:15:30.699 [2024-10-08 10:46:51.158479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:15:30.699 [2024-10-08 10:46:51.158486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:15:30.699 [2024-10-08 10:46:51.158491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:15:30.699 [2024-10-08 10:46:51.158498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:15:30.699 [2024-10-08 10:46:51.158503] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:30.699 [2024-10-08 10:46:51.158510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:30.699 [2024-10-08 10:46:51.158516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:30.700 [2024-10-08 10:46:51.158523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:30.700 [2024-10-08 10:46:51.158528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:30.700 [2024-10-08 10:46:51.158535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:30.700 [2024-10-08 10:46:51.158541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:30.700 [2024-10-08 10:46:51.158557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:30.700 [2024-10-08 10:46:51.158564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:15:30.700 
[2024-10-08 10:46:51.158571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:30.700 [2024-10-08 10:46:51.158642] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:15:30.700 [2024-10-08 10:46:51.158650] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:33.233 [2024-10-08 10:46:53.565425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.565483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:33.233 [2024-10-08 10:46:53.565498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2406.771 ms 00:15:33.233 [2024-10-08 10:46:53.565508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.581378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.581430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:33.233 [2024-10-08 10:46:53.581444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.770 ms 00:15:33.233 [2024-10-08 10:46:53.581456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.581600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.581614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:33.233 [2024-10-08 10:46:53.581624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:15:33.233 [2024-10-08 10:46:53.581633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.590340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.590377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:33.233 [2024-10-08 10:46:53.590387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.676 ms 00:15:33.233 [2024-10-08 10:46:53.590397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.590444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.590455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:33.233 [2024-10-08 10:46:53.590464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:33.233 [2024-10-08 10:46:53.590473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.590824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.590843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:33.233 [2024-10-08 10:46:53.590853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:15:33.233 [2024-10-08 10:46:53.590864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.590996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.591008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:33.233 [2024-10-08 10:46:53.591018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:15:33.233 [2024-10-08 10:46:53.591028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.596636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.596668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:33.233 [2024-10-08 10:46:53.596677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.572 ms 00:15:33.233 [2024-10-08 10:46:53.596686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.605007] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:33.233 [2024-10-08 10:46:53.619566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.619739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:33.233 [2024-10-08 10:46:53.619758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.787 ms 00:15:33.233 [2024-10-08 10:46:53.619766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.674424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.674516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:33.233 [2024-10-08 10:46:53.674535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.567 ms 00:15:33.233 [2024-10-08 10:46:53.674543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.674729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.674740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:33.233 [2024-10-08 10:46:53.674763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:15:33.233 [2024-10-08 10:46:53.674771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.677662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.677778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:33.233 [2024-10-08 10:46:53.677809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.849 ms 00:15:33.233 [2024-10-08 10:46:53.677817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.680366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.680397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:33.233 [2024-10-08 10:46:53.680408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.503 ms 00:15:33.233 [2024-10-08 10:46:53.680415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.680730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.680745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:33.233 [2024-10-08 10:46:53.680759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:15:33.233 [2024-10-08 10:46:53.680767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.705283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.705391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L 
region 00:15:33.233 [2024-10-08 10:46:53.705460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.371 ms 00:15:33.233 [2024-10-08 10:46:53.705515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.709221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.709327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:33.233 [2024-10-08 10:46:53.709389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.607 ms 00:15:33.233 [2024-10-08 10:46:53.709416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.712314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.712343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:33.233 [2024-10-08 10:46:53.712354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.792 ms 00:15:33.233 [2024-10-08 10:46:53.712373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.233 [2024-10-08 10:46:53.715890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.233 [2024-10-08 10:46:53.715921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:33.233 [2024-10-08 10:46:53.715935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.471 ms 00:15:33.234 [2024-10-08 10:46:53.715943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.234 [2024-10-08 10:46:53.716013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.234 [2024-10-08 10:46:53.716023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:33.234 [2024-10-08 10:46:53.716035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:33.234 [2024-10-08 10:46:53.716042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.234 [2024-10-08 10:46:53.716115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:33.234 [2024-10-08 10:46:53.716124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:33.234 [2024-10-08 10:46:53.716133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:33.234 [2024-10-08 10:46:53.716140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:33.234 [2024-10-08 10:46:53.716962] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:33.234 [2024-10-08 10:46:53.717974] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2569.976 ms, result 0 00:15:33.234 [2024-10-08 10:46:53.718588] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:33.234 { 00:15:33.234 "name": "ftl0", 00:15:33.234 "uuid": "9e839222-06df-480c-bbd4-867c33d1e348" 00:15:33.234 } 00:15:33.234 10:46:53 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:15:33.234 10:46:53 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:33.234 10:46:53 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:33.234 10:46:53 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:15:33.234 10:46:53 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:33.234 10:46:53 ftl.ftl_trim -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:33.234 10:46:53 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:33.492 10:46:53 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:33.751 [ 00:15:33.751 { 00:15:33.751 "name": "ftl0", 00:15:33.751 "aliases": [ 00:15:33.751 "9e839222-06df-480c-bbd4-867c33d1e348" 00:15:33.751 ], 00:15:33.751 "product_name": "FTL disk", 00:15:33.751 "block_size": 4096, 00:15:33.751 "num_blocks": 23592960, 00:15:33.751 "uuid": "9e839222-06df-480c-bbd4-867c33d1e348", 00:15:33.751 "assigned_rate_limits": { 00:15:33.751 "rw_ios_per_sec": 0, 00:15:33.751 "rw_mbytes_per_sec": 0, 00:15:33.751 "r_mbytes_per_sec": 0, 00:15:33.751 "w_mbytes_per_sec": 0 00:15:33.751 }, 00:15:33.751 "claimed": false, 00:15:33.751 "zoned": false, 00:15:33.751 "supported_io_types": { 00:15:33.751 "read": true, 00:15:33.751 "write": true, 00:15:33.751 "unmap": true, 00:15:33.751 "flush": true, 00:15:33.751 "reset": false, 00:15:33.751 "nvme_admin": false, 00:15:33.751 "nvme_io": false, 00:15:33.751 "nvme_io_md": false, 00:15:33.751 "write_zeroes": true, 00:15:33.751 "zcopy": false, 00:15:33.751 "get_zone_info": false, 00:15:33.751 "zone_management": false, 00:15:33.751 "zone_append": false, 00:15:33.752 "compare": false, 00:15:33.752 "compare_and_write": false, 00:15:33.752 "abort": false, 00:15:33.752 "seek_hole": false, 00:15:33.752 "seek_data": false, 00:15:33.752 "copy": false, 00:15:33.752 "nvme_iov_md": false 00:15:33.752 }, 00:15:33.752 "driver_specific": { 00:15:33.752 "ftl": { 00:15:33.752 "base_bdev": "22e9260f-6e1b-4a4d-962b-bc0adfab5c9e", 00:15:33.752 "cache": "nvc0n1p0" 00:15:33.752 } 00:15:33.752 } 00:15:33.752 } 00:15:33.752 ] 00:15:33.752 10:46:54 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:15:33.752 10:46:54 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:15:33.752 10:46:54 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:34.010 10:46:54 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:15:34.010 10:46:54 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:15:34.010 10:46:54 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:15:34.010 { 00:15:34.010 "name": "ftl0", 00:15:34.010 "aliases": [ 00:15:34.010 "9e839222-06df-480c-bbd4-867c33d1e348" 00:15:34.010 ], 00:15:34.010 "product_name": "FTL disk", 00:15:34.010 "block_size": 4096, 00:15:34.010 "num_blocks": 23592960, 00:15:34.010 "uuid": "9e839222-06df-480c-bbd4-867c33d1e348", 00:15:34.010 "assigned_rate_limits": { 00:15:34.010 "rw_ios_per_sec": 0, 00:15:34.010 "rw_mbytes_per_sec": 0, 00:15:34.010 "r_mbytes_per_sec": 0, 00:15:34.010 "w_mbytes_per_sec": 0 00:15:34.010 }, 00:15:34.010 "claimed": false, 00:15:34.010 "zoned": false, 00:15:34.010 "supported_io_types": { 00:15:34.010 "read": true, 00:15:34.010 "write": true, 00:15:34.010 "unmap": true, 00:15:34.010 "flush": true, 00:15:34.010 "reset": false, 00:15:34.010 "nvme_admin": false, 00:15:34.010 "nvme_io": false, 00:15:34.010 "nvme_io_md": false, 00:15:34.010 "write_zeroes": true, 00:15:34.010 "zcopy": false, 00:15:34.010 "get_zone_info": false, 00:15:34.010 "zone_management": false, 00:15:34.010 "zone_append": false, 00:15:34.010 "compare": false, 00:15:34.010 "compare_and_write": false, 00:15:34.010 "abort": false, 00:15:34.011 "seek_hole": false, 
00:15:34.011 "seek_data": false, 00:15:34.011 "copy": false, 00:15:34.011 "nvme_iov_md": false 00:15:34.011 }, 00:15:34.011 "driver_specific": { 00:15:34.011 "ftl": { 00:15:34.011 "base_bdev": "22e9260f-6e1b-4a4d-962b-bc0adfab5c9e", 00:15:34.011 "cache": "nvc0n1p0" 00:15:34.011 } 00:15:34.011 } 00:15:34.011 } 00:15:34.011 ]' 00:15:34.011 10:46:54 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:15:34.011 10:46:54 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:15:34.011 10:46:54 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:34.272 [2024-10-08 10:46:54.746973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.272 [2024-10-08 10:46:54.747092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:34.272 [2024-10-08 10:46:54.747107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:34.272 [2024-10-08 10:46:54.747117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.272 [2024-10-08 10:46:54.747153] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:34.272 [2024-10-08 10:46:54.747568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.272 [2024-10-08 10:46:54.747580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:34.272 [2024-10-08 10:46:54.747589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:15:34.272 [2024-10-08 10:46:54.747595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.272 [2024-10-08 10:46:54.748131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.272 [2024-10-08 10:46:54.748149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:34.272 [2024-10-08 10:46:54.748160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:15:34.272 [2024-10-08 10:46:54.748169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.272 [2024-10-08 10:46:54.751123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.272 [2024-10-08 10:46:54.751140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:34.272 [2024-10-08 10:46:54.751149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.924 ms 00:15:34.272 [2024-10-08 10:46:54.751155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.272 [2024-10-08 10:46:54.756333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.272 [2024-10-08 10:46:54.756356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:34.272 [2024-10-08 10:46:54.756368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.137 ms 00:15:34.272 [2024-10-08 10:46:54.756376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.272 [2024-10-08 10:46:54.757720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.272 [2024-10-08 10:46:54.757748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:34.272 [2024-10-08 10:46:54.757757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:15:34.272 [2024-10-08 10:46:54.757762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.272 [2024-10-08 10:46:54.761706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:15:34.272 [2024-10-08 10:46:54.761815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:34.272 [2024-10-08 10:46:54.761830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.886 ms 00:15:34.272 [2024-10-08 10:46:54.761837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.272 [2024-10-08 10:46:54.762008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.272 [2024-10-08 10:46:54.762015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:34.272 [2024-10-08 10:46:54.762025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:15:34.272 [2024-10-08 10:46:54.762030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.272 [2024-10-08 10:46:54.763732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.272 [2024-10-08 10:46:54.763761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:34.272 [2024-10-08 10:46:54.763772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.675 ms 00:15:34.272 [2024-10-08 10:46:54.763778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.272 [2024-10-08 10:46:54.765297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.272 [2024-10-08 10:46:54.765395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:34.272 [2024-10-08 10:46:54.765410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.466 ms 00:15:34.272 [2024-10-08 10:46:54.765417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.272 [2024-10-08 10:46:54.766601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.272 [2024-10-08 10:46:54.766624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:34.272 [2024-10-08 10:46:54.766632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:15:34.272 [2024-10-08 10:46:54.766637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.272 [2024-10-08 10:46:54.767520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.272 [2024-10-08 10:46:54.767546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:34.272 [2024-10-08 10:46:54.767555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:15:34.272 [2024-10-08 10:46:54.767560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.272 [2024-10-08 10:46:54.767591] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:34.272 [2024-10-08 10:46:54.767602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:34.272 [2024-10-08 10:46:54.767613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:34.272 [2024-10-08 10:46:54.767619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:34.272 [2024-10-08 10:46:54.767626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:34.272 [2024-10-08 10:46:54.767632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:34.272 [2024-10-08 10:46:54.767640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 
0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.767996] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 
10:46:54.768158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:34.273 [2024-10-08 10:46:54.768220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:34.274 [2024-10-08 10:46:54.768226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:34.274 [2024-10-08 10:46:54.768233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:34.274 [2024-10-08 10:46:54.768239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:34.274 [2024-10-08 10:46:54.768246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:34.274 [2024-10-08 10:46:54.768251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:34.274 [2024-10-08 10:46:54.768258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:34.274 [2024-10-08 10:46:54.768264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:34.274 [2024-10-08 10:46:54.768272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:34.274 [2024-10-08 10:46:54.768278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:34.274 [2024-10-08 10:46:54.768285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:34.274 [2024-10-08 10:46:54.768297] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:34.274 [2024-10-08 10:46:54.768305] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e839222-06df-480c-bbd4-867c33d1e348 00:15:34.274 [2024-10-08 10:46:54.768311] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:34.274 [2024-10-08 10:46:54.768317] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:34.274 [2024-10-08 10:46:54.768322] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:34.274 [2024-10-08 10:46:54.768330] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 
00:15:34.274 [2024-10-08 10:46:54.768335] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:34.274 [2024-10-08 10:46:54.768344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:34.274 [2024-10-08 10:46:54.768350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:34.274 [2024-10-08 10:46:54.768356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:34.274 [2024-10-08 10:46:54.768360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:34.274 [2024-10-08 10:46:54.768368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.274 [2024-10-08 10:46:54.768389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:34.274 [2024-10-08 10:46:54.768398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:15:34.274 [2024-10-08 10:46:54.768435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.769883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.274 [2024-10-08 10:46:54.769897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:34.274 [2024-10-08 10:46:54.769906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:15:34.274 [2024-10-08 10:46:54.769914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.770006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:34.274 [2024-10-08 10:46:54.770013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:34.274 [2024-10-08 10:46:54.770021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:15:34.274 [2024-10-08 10:46:54.770027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.775009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.274 [2024-10-08 10:46:54.775104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:34.274 [2024-10-08 10:46:54.775152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.274 [2024-10-08 10:46:54.775170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.775246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.274 [2024-10-08 10:46:54.775536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:34.274 [2024-10-08 10:46:54.775734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.274 [2024-10-08 10:46:54.775955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.776300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.274 [2024-10-08 10:46:54.776498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:34.274 [2024-10-08 10:46:54.776700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.274 [2024-10-08 10:46:54.776885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.777225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.274 [2024-10-08 10:46:54.777414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:34.274 [2024-10-08 10:46:54.777565] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.274 [2024-10-08 10:46:54.777704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.792060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.274 [2024-10-08 10:46:54.792181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:34.274 [2024-10-08 10:46:54.792234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.274 [2024-10-08 10:46:54.792564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.800317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.274 [2024-10-08 10:46:54.800440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:34.274 [2024-10-08 10:46:54.800501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.274 [2024-10-08 10:46:54.800525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.800641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.274 [2024-10-08 10:46:54.800673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:34.274 [2024-10-08 10:46:54.800727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.274 [2024-10-08 10:46:54.800750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.800838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.274 [2024-10-08 10:46:54.800891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:34.274 [2024-10-08 10:46:54.800913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.274 [2024-10-08 10:46:54.800958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.801078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.274 [2024-10-08 10:46:54.801105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:34.274 [2024-10-08 10:46:54.801175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.274 [2024-10-08 10:46:54.801199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.801275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.274 [2024-10-08 10:46:54.801302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:34.274 [2024-10-08 10:46:54.801338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.274 [2024-10-08 10:46:54.801386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.801455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.274 [2024-10-08 10:46:54.801576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:34.274 [2024-10-08 10:46:54.801603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.274 [2024-10-08 10:46:54.801622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.801710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:34.274 [2024-10-08 10:46:54.801736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:15:34.274 [2024-10-08 10:46:54.801758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:34.274 [2024-10-08 10:46:54.801777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:34.274 [2024-10-08 10:46:54.802048] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.037 ms, result 0 00:15:34.274 true 00:15:34.274 10:46:54 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 86412 00:15:34.274 10:46:54 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86412 ']' 00:15:34.274 10:46:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86412 00:15:34.274 10:46:54 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:15:34.274 10:46:54 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:34.274 10:46:54 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86412 00:15:34.533 killing process with pid 86412 00:15:34.533 10:46:54 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:34.533 10:46:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:34.533 10:46:54 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86412' 00:15:34.533 10:46:54 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86412 00:15:34.533 10:46:54 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86412 00:15:39.805 10:46:59 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:15:40.066 65536+0 records in 00:15:40.066 65536+0 records out 00:15:40.066 268435456 bytes (268 MB, 256 MiB) copied, 1.1283 s, 238 MB/s 00:15:40.066 10:47:00 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:40.066 [2024-10-08 10:47:00.636507] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:15:40.066 [2024-10-08 10:47:00.636657] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86583 ] 00:15:40.327 [2024-10-08 10:47:00.767409] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
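The sequence above completes the trim test's mid-point reset: the first FTL instance shuts down cleanly ('FTL shutdown' finishing in 55.037 ms, result 0), the owning app process 86412 is killed, dd produces a 256 MiB random pattern (65536 blocks of 4 KiB, 238 MB/s), and spdk_dd is launched to replay that pattern onto the ftl0 bdev. A minimal sketch of this data-load step, using the paths shown in the log; the redirection of dd's output into the random_pattern file is an assumption (trim.sh@66 shows no of= argument), as is the earlier capture of the echoed '{"subsystems": [...]}' dump into config/ftl.json:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk

  # Assumed: dd's 256 MiB of urandom data is directed into the random_pattern
  # file that spdk_dd reads below (65536 x 4 KiB blocks, as in the log).
  dd if=/dev/urandom bs=4K count=65536 > "$SPDK_DIR/test/ftl/random_pattern"

  # Replay the pattern onto the ftl0 bdev. spdk_dd rebuilds the bdev stack from
  # the JSON config captured earlier with save_subsystem_config -n bdev.
  "$SPDK_DIR/build/bin/spdk_dd" \
    --if="$SPDK_DIR/test/ftl/random_pattern" \
    --ob=ftl0 \
    --json="$SPDK_DIR/test/ftl/config/ftl.json"

The spdk_dd startup trace that follows mirrors the earlier FTL bring-up: the saved config reopens the base and cache bdevs, reloads the superblock and L2P, and restores the P2L checkpoints before the copy begins.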
00:15:40.327 [2024-10-08 10:47:00.788859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.327 [2024-10-08 10:47:00.823396] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.591 [2024-10-08 10:47:00.912391] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:40.591 [2024-10-08 10:47:00.912631] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:40.591 [2024-10-08 10:47:01.069039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.591 [2024-10-08 10:47:01.069081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:40.591 [2024-10-08 10:47:01.069093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:40.591 [2024-10-08 10:47:01.069106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.591 [2024-10-08 10:47:01.071332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.591 [2024-10-08 10:47:01.071469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:40.591 [2024-10-08 10:47:01.071494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.208 ms 00:15:40.591 [2024-10-08 10:47:01.071502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.591 [2024-10-08 10:47:01.071882] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:40.591 [2024-10-08 10:47:01.072165] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:40.591 [2024-10-08 10:47:01.072193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.591 [2024-10-08 10:47:01.072207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:40.591 [2024-10-08 10:47:01.072221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:15:40.591 [2024-10-08 10:47:01.072232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.591 [2024-10-08 10:47:01.073589] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:40.591 [2024-10-08 10:47:01.076306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.591 [2024-10-08 10:47:01.076344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:40.591 [2024-10-08 10:47:01.076363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:15:40.591 [2024-10-08 10:47:01.076370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.591 [2024-10-08 10:47:01.076428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.591 [2024-10-08 10:47:01.076441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:40.591 [2024-10-08 10:47:01.076449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:15:40.591 [2024-10-08 10:47:01.076457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.591 [2024-10-08 10:47:01.081599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.591 [2024-10-08 10:47:01.081633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:40.591 [2024-10-08 10:47:01.081642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.105 ms 00:15:40.591 [2024-10-08 10:47:01.081649] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.591 [2024-10-08 10:47:01.081739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.591 [2024-10-08 10:47:01.081748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:40.591 [2024-10-08 10:47:01.081757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:40.591 [2024-10-08 10:47:01.081764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.591 [2024-10-08 10:47:01.081789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.591 [2024-10-08 10:47:01.081819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:40.591 [2024-10-08 10:47:01.081827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:40.591 [2024-10-08 10:47:01.081834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.591 [2024-10-08 10:47:01.081854] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:40.591 [2024-10-08 10:47:01.083267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.591 [2024-10-08 10:47:01.083299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:40.591 [2024-10-08 10:47:01.083311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.418 ms 00:15:40.591 [2024-10-08 10:47:01.083318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.591 [2024-10-08 10:47:01.083359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.591 [2024-10-08 10:47:01.083373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:40.591 [2024-10-08 10:47:01.083380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:40.591 [2024-10-08 10:47:01.083387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.591 [2024-10-08 10:47:01.083405] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:40.591 [2024-10-08 10:47:01.083421] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:15:40.591 [2024-10-08 10:47:01.083455] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:40.591 [2024-10-08 10:47:01.083471] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:15:40.591 [2024-10-08 10:47:01.083578] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:40.591 [2024-10-08 10:47:01.083589] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:40.591 [2024-10-08 10:47:01.083599] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:40.591 [2024-10-08 10:47:01.083612] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:40.591 [2024-10-08 10:47:01.083620] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:40.591 [2024-10-08 10:47:01.083627] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:40.591 [2024-10-08 10:47:01.083635] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:15:40.591 [2024-10-08 10:47:01.083642] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:40.591 [2024-10-08 10:47:01.083649] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:40.591 [2024-10-08 10:47:01.083661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.591 [2024-10-08 10:47:01.083670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:40.591 [2024-10-08 10:47:01.083677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:15:40.591 [2024-10-08 10:47:01.083684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.591 [2024-10-08 10:47:01.083776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.591 [2024-10-08 10:47:01.083785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:40.591 [2024-10-08 10:47:01.083792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:15:40.591 [2024-10-08 10:47:01.083823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.591 [2024-10-08 10:47:01.083925] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:40.591 [2024-10-08 10:47:01.083936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:40.591 [2024-10-08 10:47:01.083945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:40.591 [2024-10-08 10:47:01.083956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:40.591 [2024-10-08 10:47:01.083965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:40.591 [2024-10-08 10:47:01.083973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:40.591 [2024-10-08 10:47:01.083988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:40.591 [2024-10-08 10:47:01.083998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:40.591 [2024-10-08 10:47:01.084006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:40.591 [2024-10-08 10:47:01.084014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:40.591 [2024-10-08 10:47:01.084022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:40.591 [2024-10-08 10:47:01.084030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:40.592 [2024-10-08 10:47:01.084037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:40.592 [2024-10-08 10:47:01.084045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:40.592 [2024-10-08 10:47:01.084053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:15:40.592 [2024-10-08 10:47:01.084060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:40.592 [2024-10-08 10:47:01.084068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:40.592 [2024-10-08 10:47:01.084076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:15:40.592 [2024-10-08 10:47:01.084083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:40.592 [2024-10-08 10:47:01.084091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:40.592 [2024-10-08 10:47:01.084098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:40.592 [2024-10-08 10:47:01.084106] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:40.592 [2024-10-08 10:47:01.084114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:40.592 [2024-10-08 10:47:01.084125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:40.592 [2024-10-08 10:47:01.084134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:40.592 [2024-10-08 10:47:01.084142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:40.592 [2024-10-08 10:47:01.084149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:40.592 [2024-10-08 10:47:01.084157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:40.592 [2024-10-08 10:47:01.084165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:40.592 [2024-10-08 10:47:01.084173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:15:40.592 [2024-10-08 10:47:01.084180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:40.592 [2024-10-08 10:47:01.084187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:40.592 [2024-10-08 10:47:01.084195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:15:40.592 [2024-10-08 10:47:01.084202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:40.592 [2024-10-08 10:47:01.084210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:40.592 [2024-10-08 10:47:01.084217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:15:40.592 [2024-10-08 10:47:01.084224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:40.592 [2024-10-08 10:47:01.084232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:40.592 [2024-10-08 10:47:01.084239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:15:40.592 [2024-10-08 10:47:01.084249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:40.592 [2024-10-08 10:47:01.084256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:40.592 [2024-10-08 10:47:01.084264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:15:40.592 [2024-10-08 10:47:01.084272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:40.592 [2024-10-08 10:47:01.084279] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:40.592 [2024-10-08 10:47:01.084288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:40.592 [2024-10-08 10:47:01.084295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:40.592 [2024-10-08 10:47:01.084305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:40.592 [2024-10-08 10:47:01.084312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:40.592 [2024-10-08 10:47:01.084319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:40.592 [2024-10-08 10:47:01.084325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:40.592 [2024-10-08 10:47:01.084331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:40.592 [2024-10-08 10:47:01.084338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:40.592 [2024-10-08 10:47:01.084344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:15:40.592 [2024-10-08 10:47:01.084352] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:40.592 [2024-10-08 10:47:01.084360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:40.592 [2024-10-08 10:47:01.084370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:40.592 [2024-10-08 10:47:01.084379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:15:40.592 [2024-10-08 10:47:01.084386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:15:40.592 [2024-10-08 10:47:01.084393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:15:40.592 [2024-10-08 10:47:01.084400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:15:40.592 [2024-10-08 10:47:01.084407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:15:40.592 [2024-10-08 10:47:01.084414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:15:40.592 [2024-10-08 10:47:01.084421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:15:40.592 [2024-10-08 10:47:01.084428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:15:40.592 [2024-10-08 10:47:01.084435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:15:40.592 [2024-10-08 10:47:01.084442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:15:40.592 [2024-10-08 10:47:01.084449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:15:40.592 [2024-10-08 10:47:01.084456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:15:40.592 [2024-10-08 10:47:01.084464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:15:40.592 [2024-10-08 10:47:01.084470] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:40.592 [2024-10-08 10:47:01.084479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:40.592 [2024-10-08 10:47:01.084489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:40.592 [2024-10-08 10:47:01.084496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:40.592 [2024-10-08 10:47:01.084503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:40.592 [2024-10-08 10:47:01.084510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:40.592 [2024-10-08 10:47:01.084517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.592 [2024-10-08 10:47:01.084527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:40.592 [2024-10-08 10:47:01.084534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:15:40.592 [2024-10-08 10:47:01.084541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.592 [2024-10-08 10:47:01.102055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.592 [2024-10-08 10:47:01.102204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:40.592 [2024-10-08 10:47:01.102265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.465 ms 00:15:40.592 [2024-10-08 10:47:01.102289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.592 [2024-10-08 10:47:01.102434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.592 [2024-10-08 10:47:01.102512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:40.592 [2024-10-08 10:47:01.102540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:15:40.592 [2024-10-08 10:47:01.102559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.592 [2024-10-08 10:47:01.111373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.592 [2024-10-08 10:47:01.111507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:40.592 [2024-10-08 10:47:01.111562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.708 ms 00:15:40.592 [2024-10-08 10:47:01.111588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.592 [2024-10-08 10:47:01.111656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.592 [2024-10-08 10:47:01.111685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:40.592 [2024-10-08 10:47:01.111716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:40.592 [2024-10-08 10:47:01.111737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.592 [2024-10-08 10:47:01.112177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.592 [2024-10-08 10:47:01.112270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:40.592 [2024-10-08 10:47:01.112300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:15:40.592 [2024-10-08 10:47:01.112344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.592 [2024-10-08 10:47:01.112591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.592 [2024-10-08 10:47:01.112633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:40.592 [2024-10-08 10:47:01.112696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:15:40.592 [2024-10-08 10:47:01.112727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.592 [2024-10-08 10:47:01.118052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.592 [2024-10-08 
10:47:01.118161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:40.592 [2024-10-08 10:47:01.118206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.286 ms 00:15:40.592 [2024-10-08 10:47:01.118231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.593 [2024-10-08 10:47:01.120978] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:15:40.593 [2024-10-08 10:47:01.121116] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:40.593 [2024-10-08 10:47:01.121181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.593 [2024-10-08 10:47:01.121201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:40.593 [2024-10-08 10:47:01.121220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.849 ms 00:15:40.593 [2024-10-08 10:47:01.121238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.593 [2024-10-08 10:47:01.136005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.593 [2024-10-08 10:47:01.136136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:40.593 [2024-10-08 10:47:01.136191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.718 ms 00:15:40.593 [2024-10-08 10:47:01.136214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.593 [2024-10-08 10:47:01.138688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.593 [2024-10-08 10:47:01.138835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:40.593 [2024-10-08 10:47:01.138891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.137 ms 00:15:40.593 [2024-10-08 10:47:01.138914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.593 [2024-10-08 10:47:01.140460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.593 [2024-10-08 10:47:01.140577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:40.593 [2024-10-08 10:47:01.140628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.435 ms 00:15:40.593 [2024-10-08 10:47:01.140649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.593 [2024-10-08 10:47:01.141271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.593 [2024-10-08 10:47:01.141344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:40.593 [2024-10-08 10:47:01.141707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:15:40.593 [2024-10-08 10:47:01.141759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.593 [2024-10-08 10:47:01.157990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.593 [2024-10-08 10:47:01.158153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:40.593 [2024-10-08 10:47:01.158210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.161 ms 00:15:40.593 [2024-10-08 10:47:01.158233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.855 [2024-10-08 10:47:01.165778] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:40.855 [2024-10-08 10:47:01.180716] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.855 [2024-10-08 10:47:01.180855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:40.855 [2024-10-08 10:47:01.180915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.392 ms 00:15:40.855 [2024-10-08 10:47:01.180931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.855 [2024-10-08 10:47:01.181042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.855 [2024-10-08 10:47:01.181054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:40.855 [2024-10-08 10:47:01.181067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:40.855 [2024-10-08 10:47:01.181083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.855 [2024-10-08 10:47:01.181134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.855 [2024-10-08 10:47:01.181143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:40.855 [2024-10-08 10:47:01.181151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:15:40.855 [2024-10-08 10:47:01.181159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.855 [2024-10-08 10:47:01.181182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.855 [2024-10-08 10:47:01.181191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:40.855 [2024-10-08 10:47:01.181199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:40.855 [2024-10-08 10:47:01.181206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.855 [2024-10-08 10:47:01.181238] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:40.855 [2024-10-08 10:47:01.181247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.855 [2024-10-08 10:47:01.181257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:40.855 [2024-10-08 10:47:01.181264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:40.855 [2024-10-08 10:47:01.181272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.855 [2024-10-08 10:47:01.185521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.855 [2024-10-08 10:47:01.185558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:40.855 [2024-10-08 10:47:01.185569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.229 ms 00:15:40.855 [2024-10-08 10:47:01.185583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.855 [2024-10-08 10:47:01.185670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:40.855 [2024-10-08 10:47:01.185681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:40.855 [2024-10-08 10:47:01.185696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:40.855 [2024-10-08 10:47:01.185703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:40.855 [2024-10-08 10:47:01.186497] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:40.855 [2024-10-08 10:47:01.187555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.186 
ms, result 0 00:15:40.855 [2024-10-08 10:47:01.189286] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:40.855 [2024-10-08 10:47:01.200009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:41.800  [2024-10-08T10:47:03.321Z] Copying: 17/256 [MB] (17 MBps) [2024-10-08T10:47:04.265Z] Copying: 40/256 [MB] (23 MBps) [2024-10-08T10:47:05.209Z] Copying: 63/256 [MB] (22 MBps) [2024-10-08T10:47:06.592Z] Copying: 80/256 [MB] (17 MBps) [2024-10-08T10:47:07.536Z] Copying: 108/256 [MB] (27 MBps) [2024-10-08T10:47:08.484Z] Copying: 120/256 [MB] (12 MBps) [2024-10-08T10:47:09.420Z] Copying: 131/256 [MB] (11 MBps) [2024-10-08T10:47:10.354Z] Copying: 159/256 [MB] (27 MBps) [2024-10-08T10:47:11.290Z] Copying: 204/256 [MB] (45 MBps) [2024-10-08T10:47:12.235Z] Copying: 236/256 [MB] (31 MBps) [2024-10-08T10:47:12.808Z] Copying: 251/256 [MB] (15 MBps) [2024-10-08T10:47:12.809Z] Copying: 256/256 [MB] (average 22 MBps)[2024-10-08 10:47:12.500595] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:52.232 [2024-10-08 10:47:12.502560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.232 [2024-10-08 10:47:12.502613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:52.232 [2024-10-08 10:47:12.502628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:52.232 [2024-10-08 10:47:12.502642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.232 [2024-10-08 10:47:12.502666] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:52.232 [2024-10-08 10:47:12.503382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.232 [2024-10-08 10:47:12.503417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:52.232 [2024-10-08 10:47:12.503429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.701 ms 00:15:52.232 [2024-10-08 10:47:12.503446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.232 [2024-10-08 10:47:12.506571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.232 [2024-10-08 10:47:12.506621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:52.232 [2024-10-08 10:47:12.506632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.096 ms 00:15:52.232 [2024-10-08 10:47:12.506640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.232 [2024-10-08 10:47:12.514423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.232 [2024-10-08 10:47:12.514611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:52.232 [2024-10-08 10:47:12.514631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.763 ms 00:15:52.232 [2024-10-08 10:47:12.514639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.232 [2024-10-08 10:47:12.521650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.232 [2024-10-08 10:47:12.521826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:52.232 [2024-10-08 10:47:12.521845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.923 ms 00:15:52.232 [2024-10-08 10:47:12.521853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:15:52.232 [2024-10-08 10:47:12.524339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.232 [2024-10-08 10:47:12.524385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:52.232 [2024-10-08 10:47:12.524395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.430 ms 00:15:52.232 [2024-10-08 10:47:12.524403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.232 [2024-10-08 10:47:12.529044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.232 [2024-10-08 10:47:12.529214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:52.232 [2024-10-08 10:47:12.529233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.594 ms 00:15:52.232 [2024-10-08 10:47:12.529249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.232 [2024-10-08 10:47:12.529406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.232 [2024-10-08 10:47:12.529428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:52.232 [2024-10-08 10:47:12.529437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:15:52.232 [2024-10-08 10:47:12.529444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.232 [2024-10-08 10:47:12.532658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.232 [2024-10-08 10:47:12.532852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:52.232 [2024-10-08 10:47:12.532869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.195 ms 00:15:52.232 [2024-10-08 10:47:12.532877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.232 [2024-10-08 10:47:12.535936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.232 [2024-10-08 10:47:12.536095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:52.232 [2024-10-08 10:47:12.536112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:15:52.232 [2024-10-08 10:47:12.536119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.232 [2024-10-08 10:47:12.538469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.232 [2024-10-08 10:47:12.538519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:52.232 [2024-10-08 10:47:12.538529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.260 ms 00:15:52.232 [2024-10-08 10:47:12.538536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.232 [2024-10-08 10:47:12.540753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.232 [2024-10-08 10:47:12.540818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:52.232 [2024-10-08 10:47:12.540828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.143 ms 00:15:52.232 [2024-10-08 10:47:12.540835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.232 [2024-10-08 10:47:12.540877] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:52.232 [2024-10-08 10:47:12.540900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.540909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.540918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.540926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.540935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.540942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.540950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.540957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.540964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.540972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.540979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.540986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.540993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:52.232 [2024-10-08 10:47:12.541282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541305] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 
10:47:12.541492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:52.233 [2024-10-08 10:47:12.541680] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:52.233 [2024-10-08 10:47:12.541688] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e839222-06df-480c-bbd4-867c33d1e348 00:15:52.233 [2024-10-08 10:47:12.541696] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:52.233 [2024-10-08 10:47:12.541704] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:52.233 [2024-10-08 10:47:12.541712] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:52.233 [2024-10-08 10:47:12.541720] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:52.233 [2024-10-08 10:47:12.541727] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:52.233 [2024-10-08 10:47:12.541735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:52.233 [2024-10-08 10:47:12.541750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:52.233 [2024-10-08 10:47:12.541757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:52.233 [2024-10-08 10:47:12.541763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:52.233 [2024-10-08 10:47:12.541770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.233 [2024-10-08 10:47:12.541778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:52.233 [2024-10-08 10:47:12.541786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:15:52.233 [2024-10-08 10:47:12.541810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.233 [2024-10-08 10:47:12.544137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.233 [2024-10-08 10:47:12.544169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:52.233 [2024-10-08 10:47:12.544179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.309 ms 00:15:52.233 [2024-10-08 10:47:12.544186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.233 [2024-10-08 10:47:12.544298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:52.233 [2024-10-08 10:47:12.544313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:52.233 [2024-10-08 10:47:12.544322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:15:52.233 [2024-10-08 10:47:12.544329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.233 [2024-10-08 10:47:12.552192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:52.233 [2024-10-08 10:47:12.552241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:52.233 [2024-10-08 10:47:12.552252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:52.233 [2024-10-08 10:47:12.552261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.233 [2024-10-08 10:47:12.552341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:52.233 [2024-10-08 10:47:12.552359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:52.233 [2024-10-08 10:47:12.552373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:52.233 [2024-10-08 10:47:12.552384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.233 [2024-10-08 10:47:12.552428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:52.233 [2024-10-08 10:47:12.552437] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:52.233 [2024-10-08 10:47:12.552445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:52.233 [2024-10-08 10:47:12.552453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.233 [2024-10-08 10:47:12.552471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:52.233 [2024-10-08 10:47:12.552479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:52.233 [2024-10-08 10:47:12.552490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:52.233 [2024-10-08 10:47:12.552500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.233 [2024-10-08 10:47:12.567034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:52.233 [2024-10-08 10:47:12.567088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:52.233 [2024-10-08 10:47:12.567099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:52.233 [2024-10-08 10:47:12.567108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.233 [2024-10-08 10:47:12.578116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:52.233 [2024-10-08 10:47:12.578295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:52.233 [2024-10-08 10:47:12.578321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:52.233 [2024-10-08 10:47:12.578335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.233 [2024-10-08 10:47:12.578393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:52.233 [2024-10-08 10:47:12.578403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:52.233 [2024-10-08 10:47:12.578412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:52.233 [2024-10-08 10:47:12.578420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.233 [2024-10-08 10:47:12.578451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:52.233 [2024-10-08 10:47:12.578461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:52.233 [2024-10-08 10:47:12.578468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:52.233 [2024-10-08 10:47:12.578479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.234 [2024-10-08 10:47:12.578558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:52.234 [2024-10-08 10:47:12.578572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:52.234 [2024-10-08 10:47:12.578584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:52.234 [2024-10-08 10:47:12.578595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.234 [2024-10-08 10:47:12.578625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:52.234 [2024-10-08 10:47:12.578634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:52.234 [2024-10-08 10:47:12.578643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:52.234 [2024-10-08 10:47:12.578650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.234 [2024-10-08 10:47:12.578696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:15:52.234 [2024-10-08 10:47:12.578706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:52.234 [2024-10-08 10:47:12.578714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:52.234 [2024-10-08 10:47:12.578722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.234 [2024-10-08 10:47:12.578769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:52.234 [2024-10-08 10:47:12.578780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:52.234 [2024-10-08 10:47:12.578808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:52.234 [2024-10-08 10:47:12.578817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:52.234 [2024-10-08 10:47:12.578968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 76.405 ms, result 0 00:15:52.495 00:15:52.495 00:15:52.495 10:47:12 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=86713 00:15:52.495 10:47:12 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 86713 00:15:52.495 10:47:12 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86713 ']' 00:15:52.495 10:47:12 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:15:52.495 10:47:12 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.495 10:47:12 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:52.495 10:47:12 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.495 10:47:12 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:52.495 10:47:12 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:15:52.495 [2024-10-08 10:47:13.059007] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:15:52.495 [2024-10-08 10:47:13.059437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86713 ] 00:15:52.757 [2024-10-08 10:47:13.191406] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:52.757 [2024-10-08 10:47:13.213658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.757 [2024-10-08 10:47:13.264317] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.700 10:47:13 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:53.700 10:47:13 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:15:53.700 10:47:13 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:15:53.700 [2024-10-08 10:47:14.136095] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:53.700 [2024-10-08 10:47:14.136375] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:53.961 [2024-10-08 10:47:14.313223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.961 [2024-10-08 10:47:14.313454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:53.961 [2024-10-08 10:47:14.313483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:53.961 [2024-10-08 10:47:14.313493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.961 [2024-10-08 10:47:14.316021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.961 [2024-10-08 10:47:14.316076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:53.961 [2024-10-08 10:47:14.316090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.497 ms 00:15:53.961 [2024-10-08 10:47:14.316098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.961 [2024-10-08 10:47:14.316210] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:53.961 [2024-10-08 10:47:14.316479] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:53.961 [2024-10-08 10:47:14.316496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.961 [2024-10-08 10:47:14.316506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:53.961 [2024-10-08 10:47:14.316518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:15:53.961 [2024-10-08 10:47:14.316527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.961 [2024-10-08 10:47:14.318345] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:53.961 [2024-10-08 10:47:14.322295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.961 [2024-10-08 10:47:14.322355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:53.961 [2024-10-08 10:47:14.322367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.958 ms 00:15:53.961 [2024-10-08 10:47:14.322378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.961 [2024-10-08 10:47:14.322480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.961 [2024-10-08 10:47:14.322502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:53.961 [2024-10-08 10:47:14.322515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:15:53.961 [2024-10-08 10:47:14.322526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.961 [2024-10-08 10:47:14.330726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.961 [2024-10-08 
10:47:14.330775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:53.961 [2024-10-08 10:47:14.330786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.148 ms 00:15:53.961 [2024-10-08 10:47:14.330816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.961 [2024-10-08 10:47:14.330952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.961 [2024-10-08 10:47:14.330967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:53.961 [2024-10-08 10:47:14.330976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:15:53.961 [2024-10-08 10:47:14.330991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.961 [2024-10-08 10:47:14.331024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.962 [2024-10-08 10:47:14.331034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:53.962 [2024-10-08 10:47:14.331042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:15:53.962 [2024-10-08 10:47:14.331054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.962 [2024-10-08 10:47:14.331079] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:53.962 [2024-10-08 10:47:14.333155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.962 [2024-10-08 10:47:14.333329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:53.962 [2024-10-08 10:47:14.333352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.073 ms 00:15:53.962 [2024-10-08 10:47:14.333360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.962 [2024-10-08 10:47:14.333415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.962 [2024-10-08 10:47:14.333424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:53.962 [2024-10-08 10:47:14.333435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:15:53.962 [2024-10-08 10:47:14.333443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.962 [2024-10-08 10:47:14.333466] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:53.962 [2024-10-08 10:47:14.333487] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:15:53.962 [2024-10-08 10:47:14.333532] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:53.962 [2024-10-08 10:47:14.333550] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:15:53.962 [2024-10-08 10:47:14.333659] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:53.962 [2024-10-08 10:47:14.333671] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:53.962 [2024-10-08 10:47:14.333684] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:53.962 [2024-10-08 10:47:14.333696] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:53.962 [2024-10-08 10:47:14.333714] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:53.962 [2024-10-08 10:47:14.333723] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:53.962 [2024-10-08 10:47:14.333734] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:53.962 [2024-10-08 10:47:14.333742] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:53.962 [2024-10-08 10:47:14.333752] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:53.962 [2024-10-08 10:47:14.333762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.962 [2024-10-08 10:47:14.333772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:53.962 [2024-10-08 10:47:14.333780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:15:53.962 [2024-10-08 10:47:14.333790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.962 [2024-10-08 10:47:14.333902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.962 [2024-10-08 10:47:14.333915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:53.962 [2024-10-08 10:47:14.333925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:15:53.962 [2024-10-08 10:47:14.333935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.962 [2024-10-08 10:47:14.334041] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:53.962 [2024-10-08 10:47:14.334057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:53.962 [2024-10-08 10:47:14.334067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:53.962 [2024-10-08 10:47:14.334080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:53.962 [2024-10-08 10:47:14.334090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:53.962 [2024-10-08 10:47:14.334100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:53.962 [2024-10-08 10:47:14.334109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:53.962 [2024-10-08 10:47:14.334119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:53.962 [2024-10-08 10:47:14.334127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:53.962 [2024-10-08 10:47:14.334139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:53.962 [2024-10-08 10:47:14.334147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:53.962 [2024-10-08 10:47:14.334157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:53.962 [2024-10-08 10:47:14.334165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:53.962 [2024-10-08 10:47:14.334180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:53.962 [2024-10-08 10:47:14.334188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:15:53.962 [2024-10-08 10:47:14.334198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:53.962 [2024-10-08 10:47:14.334206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:53.962 [2024-10-08 10:47:14.334216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:15:53.962 [2024-10-08 10:47:14.334223] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:15:53.962 [2024-10-08 10:47:14.334235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:53.962 [2024-10-08 10:47:14.334244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:53.962 [2024-10-08 10:47:14.334255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:53.962 [2024-10-08 10:47:14.334263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:53.962 [2024-10-08 10:47:14.334274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:53.962 [2024-10-08 10:47:14.334281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:53.962 [2024-10-08 10:47:14.334290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:53.962 [2024-10-08 10:47:14.334297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:53.962 [2024-10-08 10:47:14.334305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:53.962 [2024-10-08 10:47:14.334312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:53.962 [2024-10-08 10:47:14.334321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:15:53.962 [2024-10-08 10:47:14.334328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:53.962 [2024-10-08 10:47:14.334336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:53.962 [2024-10-08 10:47:14.334343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:15:53.962 [2024-10-08 10:47:14.334351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:53.962 [2024-10-08 10:47:14.334358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:53.962 [2024-10-08 10:47:14.334370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:15:53.962 [2024-10-08 10:47:14.334377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:53.962 [2024-10-08 10:47:14.334385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:53.962 [2024-10-08 10:47:14.334393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:15:53.962 [2024-10-08 10:47:14.334402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:53.962 [2024-10-08 10:47:14.334409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:53.962 [2024-10-08 10:47:14.334417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:15:53.962 [2024-10-08 10:47:14.334426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:53.962 [2024-10-08 10:47:14.334435] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:53.962 [2024-10-08 10:47:14.334445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:53.962 [2024-10-08 10:47:14.334454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:53.962 [2024-10-08 10:47:14.334462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:53.962 [2024-10-08 10:47:14.334472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:53.962 [2024-10-08 10:47:14.334479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:53.962 [2024-10-08 10:47:14.334487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:53.962 [2024-10-08 10:47:14.334494] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:53.962 [2024-10-08 10:47:14.334504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:53.962 [2024-10-08 10:47:14.334512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:53.962 [2024-10-08 10:47:14.334524] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:53.962 [2024-10-08 10:47:14.334533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:53.962 [2024-10-08 10:47:14.334544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:53.962 [2024-10-08 10:47:14.334552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:15:53.962 [2024-10-08 10:47:14.334561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:15:53.962 [2024-10-08 10:47:14.334569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:15:53.962 [2024-10-08 10:47:14.334579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:15:53.962 [2024-10-08 10:47:14.334587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:15:53.962 [2024-10-08 10:47:14.334599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:15:53.962 [2024-10-08 10:47:14.334606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:15:53.962 [2024-10-08 10:47:14.334616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:15:53.962 [2024-10-08 10:47:14.334624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:15:53.962 [2024-10-08 10:47:14.334633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:15:53.962 [2024-10-08 10:47:14.334640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:15:53.962 [2024-10-08 10:47:14.334651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:15:53.962 [2024-10-08 10:47:14.334659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:15:53.962 [2024-10-08 10:47:14.334669] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:53.962 [2024-10-08 10:47:14.334678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:53.962 [2024-10-08 10:47:14.334690] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:15:53.963 [2024-10-08 10:47:14.334698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:53.963 [2024-10-08 10:47:14.334708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:53.963 [2024-10-08 10:47:14.334716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:53.963 [2024-10-08 10:47:14.334726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.334734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:53.963 [2024-10-08 10:47:14.334744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:15:53.963 [2024-10-08 10:47:14.334755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.348608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.348783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:53.963 [2024-10-08 10:47:14.348826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.778 ms 00:15:53.963 [2024-10-08 10:47:14.348835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.348969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.348982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:53.963 [2024-10-08 10:47:14.348994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:15:53.963 [2024-10-08 10:47:14.349001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.360655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.360699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:53.963 [2024-10-08 10:47:14.360712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.616 ms 00:15:53.963 [2024-10-08 10:47:14.360720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.360791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.360825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:53.963 [2024-10-08 10:47:14.360837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:53.963 [2024-10-08 10:47:14.360845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.361378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.361415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:53.963 [2024-10-08 10:47:14.361429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:15:53.963 [2024-10-08 10:47:14.361438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.361595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.361605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:53.963 [2024-10-08 10:47:14.361619] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:15:53.963 [2024-10-08 10:47:14.361629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.380426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.380490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:53.963 [2024-10-08 10:47:14.380510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.764 ms 00:15:53.963 [2024-10-08 10:47:14.380522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.384724] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:15:53.963 [2024-10-08 10:47:14.384786] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:53.963 [2024-10-08 10:47:14.384826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.384837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:53.963 [2024-10-08 10:47:14.384851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.128 ms 00:15:53.963 [2024-10-08 10:47:14.384860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.401193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.401243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:53.963 [2024-10-08 10:47:14.401261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.230 ms 00:15:53.963 [2024-10-08 10:47:14.401269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.404283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.404332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:53.963 [2024-10-08 10:47:14.404345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.913 ms 00:15:53.963 [2024-10-08 10:47:14.404352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.407144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.407324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:53.963 [2024-10-08 10:47:14.407349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:15:53.963 [2024-10-08 10:47:14.407356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.407702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.407715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:53.963 [2024-10-08 10:47:14.407728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:15:53.963 [2024-10-08 10:47:14.407735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.432095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.432154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:53.963 [2024-10-08 10:47:14.432173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.324 ms 
00:15:53.963 [2024-10-08 10:47:14.432182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.440491] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:53.963 [2024-10-08 10:47:14.458710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.458956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:53.963 [2024-10-08 10:47:14.458977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.437 ms 00:15:53.963 [2024-10-08 10:47:14.458988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.459079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.459093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:53.963 [2024-10-08 10:47:14.459104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:15:53.963 [2024-10-08 10:47:14.459117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.459175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.459185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:53.963 [2024-10-08 10:47:14.459197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:53.963 [2024-10-08 10:47:14.459207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.459233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.459246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:53.963 [2024-10-08 10:47:14.459254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:53.963 [2024-10-08 10:47:14.459264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.459301] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:53.963 [2024-10-08 10:47:14.459314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.459322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:53.963 [2024-10-08 10:47:14.459332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:15:53.963 [2024-10-08 10:47:14.459340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.465117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.465163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:53.963 [2024-10-08 10:47:14.465177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.740 ms 00:15:53.963 [2024-10-08 10:47:14.465185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 10:47:14.465279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:53.963 [2024-10-08 10:47:14.465289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:53.963 [2024-10-08 10:47:14.465300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:15:53.963 [2024-10-08 10:47:14.465308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:53.963 [2024-10-08 
10:47:14.466346] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:53.963 [2024-10-08 10:47:14.467675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 152.805 ms, result 0 00:15:53.963 [2024-10-08 10:47:14.470131] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:53.963 Some configs were skipped because the RPC state that can call them passed over. 00:15:53.963 10:47:14 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:15:54.224 [2024-10-08 10:47:14.707376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.224 [2024-10-08 10:47:14.707584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:15:54.224 [2024-10-08 10:47:14.707650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.102 ms 00:15:54.224 [2024-10-08 10:47:14.707678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.224 [2024-10-08 10:47:14.707737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.465 ms, result 0 00:15:54.224 true 00:15:54.224 10:47:14 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:15:54.485 [2024-10-08 10:47:14.931419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.485 [2024-10-08 10:47:14.931603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:15:54.485 [2024-10-08 10:47:14.931671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.851 ms 00:15:54.485 [2024-10-08 10:47:14.931696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.485 [2024-10-08 10:47:14.931759] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.202 ms, result 0 00:15:54.485 true 00:15:54.485 10:47:14 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 86713 00:15:54.485 10:47:14 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86713 ']' 00:15:54.485 10:47:14 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86713 00:15:54.485 10:47:14 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:15:54.485 10:47:14 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.485 10:47:14 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86713 00:15:54.485 killing process with pid 86713 00:15:54.485 10:47:14 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.485 10:47:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.485 10:47:14 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86713' 00:15:54.485 10:47:14 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86713 00:15:54.485 10:47:14 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86713 00:15:54.748 [2024-10-08 10:47:15.096206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.748 [2024-10-08 10:47:15.096265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:54.748 [2024-10-08 10:47:15.096278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:54.748 [2024-10-08 
10:47:15.096290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.748 [2024-10-08 10:47:15.096312] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:15:54.748 [2024-10-08 10:47:15.097005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.748 [2024-10-08 10:47:15.097118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:54.748 [2024-10-08 10:47:15.097144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:15:54.748 [2024-10-08 10:47:15.097166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.748 [2024-10-08 10:47:15.097479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.748 [2024-10-08 10:47:15.097512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:54.748 [2024-10-08 10:47:15.097535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:15:54.748 [2024-10-08 10:47:15.097555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.748 [2024-10-08 10:47:15.102044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.748 [2024-10-08 10:47:15.102166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:54.748 [2024-10-08 10:47:15.102225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.456 ms 00:15:54.748 [2024-10-08 10:47:15.102250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.748 [2024-10-08 10:47:15.109328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.748 [2024-10-08 10:47:15.109440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:54.748 [2024-10-08 10:47:15.109496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.970 ms 00:15:54.748 [2024-10-08 10:47:15.109518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.748 [2024-10-08 10:47:15.111772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.748 [2024-10-08 10:47:15.111909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:54.748 [2024-10-08 10:47:15.111927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:15:54.748 [2024-10-08 10:47:15.111935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.748 [2024-10-08 10:47:15.116405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.748 [2024-10-08 10:47:15.116515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:54.748 [2024-10-08 10:47:15.116573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.430 ms 00:15:54.748 [2024-10-08 10:47:15.116595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.748 [2024-10-08 10:47:15.116737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.748 [2024-10-08 10:47:15.116762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:54.748 [2024-10-08 10:47:15.116785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:15:54.749 [2024-10-08 10:47:15.116909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.749 [2024-10-08 10:47:15.119514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.749 [2024-10-08 10:47:15.119625] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:54.749 [2024-10-08 10:47:15.119680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.567 ms 00:15:54.749 [2024-10-08 10:47:15.119702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.749 [2024-10-08 10:47:15.122214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.749 [2024-10-08 10:47:15.122331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:54.749 [2024-10-08 10:47:15.122383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.351 ms 00:15:54.749 [2024-10-08 10:47:15.122405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.749 [2024-10-08 10:47:15.124378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.749 [2024-10-08 10:47:15.124492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:54.749 [2024-10-08 10:47:15.124544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.874 ms 00:15:54.749 [2024-10-08 10:47:15.124565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.749 [2024-10-08 10:47:15.126675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.749 [2024-10-08 10:47:15.126791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:54.749 [2024-10-08 10:47:15.126856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.976 ms 00:15:54.749 [2024-10-08 10:47:15.126865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.749 [2024-10-08 10:47:15.126932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:54.749 [2024-10-08 10:47:15.126948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.126963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.126971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.126980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.126987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.126997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127055] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127270] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 
10:47:15.127476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:54.749 [2024-10-08 10:47:15.127627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:15:54.750 [2024-10-08 10:47:15.127686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.127985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:54.750 [2024-10-08 10:47:15.128035] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:54.750 [2024-10-08 10:47:15.128059] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e839222-06df-480c-bbd4-867c33d1e348 00:15:54.750 [2024-10-08 10:47:15.128089] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:54.750 [2024-10-08 10:47:15.128110] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:54.750 [2024-10-08 10:47:15.128132] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:54.750 [2024-10-08 10:47:15.128251] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:54.750 [2024-10-08 10:47:15.128274] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:54.750 [2024-10-08 10:47:15.128295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:54.750 [2024-10-08 10:47:15.128318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:54.750 [2024-10-08 10:47:15.128338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:54.750 [2024-10-08 10:47:15.128357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:54.750 [2024-10-08 10:47:15.128377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.750 [2024-10-08 10:47:15.128396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:54.750 [2024-10-08 10:47:15.128422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.447 ms 00:15:54.750 [2024-10-08 10:47:15.128441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.130150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.750 [2024-10-08 10:47:15.130254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:54.750 [2024-10-08 10:47:15.130304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.671 ms 00:15:54.750 [2024-10-08 10:47:15.130330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.130469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.750 [2024-10-08 10:47:15.130547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:54.750 [2024-10-08 10:47:15.130595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:15:54.750 [2024-10-08 10:47:15.130617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.136624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.750 [2024-10-08 10:47:15.136741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:54.750 [2024-10-08 10:47:15.136807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.750 [2024-10-08 10:47:15.136831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.136913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.750 [2024-10-08 10:47:15.136937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:54.750 [2024-10-08 10:47:15.136961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.750 [2024-10-08 10:47:15.136980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.137086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.750 [2024-10-08 10:47:15.137115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:54.750 [2024-10-08 10:47:15.137138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.750 [2024-10-08 10:47:15.137202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.137245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.750 [2024-10-08 10:47:15.137255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:54.750 [2024-10-08 10:47:15.137266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.750 [2024-10-08 10:47:15.137273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.147930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.750 [2024-10-08 10:47:15.147971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:54.750 [2024-10-08 10:47:15.147983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.750 [2024-10-08 10:47:15.147990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.156197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.750 [2024-10-08 10:47:15.156237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:54.750 [2024-10-08 10:47:15.156253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.750 
[2024-10-08 10:47:15.156260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.156303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.750 [2024-10-08 10:47:15.156312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:54.750 [2024-10-08 10:47:15.156325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.750 [2024-10-08 10:47:15.156333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.156366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.750 [2024-10-08 10:47:15.156375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:54.750 [2024-10-08 10:47:15.156384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.750 [2024-10-08 10:47:15.156391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.156462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.750 [2024-10-08 10:47:15.156472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:54.750 [2024-10-08 10:47:15.156481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.750 [2024-10-08 10:47:15.156491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.156525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.750 [2024-10-08 10:47:15.156539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:54.750 [2024-10-08 10:47:15.156550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.750 [2024-10-08 10:47:15.156558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.156605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.750 [2024-10-08 10:47:15.156614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:54.750 [2024-10-08 10:47:15.156624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.750 [2024-10-08 10:47:15.156634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.156681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.750 [2024-10-08 10:47:15.156691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:54.750 [2024-10-08 10:47:15.156701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.750 [2024-10-08 10:47:15.156708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.750 [2024-10-08 10:47:15.156869] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.634 ms, result 0 00:15:55.012 10:47:15 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:15:55.012 10:47:15 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:55.012 [2024-10-08 10:47:15.437955] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
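The NOTICE stream above is emitted by trace_step() in mngt/ftl_mngt.c: every management step is logged as an Action (or, on teardown, a Rollback) followed by its name:, duration: and status: lines, and each process closes with a finish_msg total ('FTL startup', duration = 152.805 ms and 'FTL shutdown', duration = 60.634 ms in this run). A minimal sketch for folding a saved copy of this console log into a per-step duration table, assuming the flattened layout seen here, where the elapsed-time stamp of the next entry terminates each step name:

import re
import sys

def step_durations(log_text):
    # "name: <step> 00:15:54.750 [...": in this flattened log, the elapsed
    # stamp of the following entry marks the end of the step name.
    names = re.findall(r"name: (.+?) \d{2}:\d{2}:\d{2}\.\d{3}", log_text)
    # trace_step prints "duration: X ms"; finish_msg prints "duration = X ms",
    # so the process totals are deliberately not matched here.
    durs = [float(d) for d in re.findall(r"duration: ([\d.]+) ms", log_text)]
    return list(zip(names, durs))  # one duration per named step, in order

if __name__ == "__main__":
    steps = step_durations(sys.stdin.read())
    for name, ms in sorted(steps, key=lambda s: -s[1])[:10]:
        print(f"{ms:10.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):10.3f} ms  across {len(steps)} steps")

Run as python3 ftl_steps.py < console.log (file names hypothetical); the per-step sum should land close to the finish_msg totals, since trace_step logs each step of a management process exactly once.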
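The two 'FTL trim' processes earlier in this run (3.465 ms and 3.202 ms, each answered with true) were driven by scripts/rpc.py bdev_ftl_unmap at trim.sh@78 and @79. That script is a thin JSON-RPC 2.0 client over SPDK's Unix-domain RPC socket; the sketch below issues the same call directly, with the socket path and parameter names taken as assumptions from SPDK's documented defaults (scripts/rpc.py remains the supported interface):

import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumed default SPDK RPC socket path

def rpc_call(method, params, req_id=1):
    # One JSON-RPC 2.0 request/response exchange over the RPC socket.
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                return None  # socket closed before a full reply arrived
            buf += chunk
            try:
                return json.loads(buf.decode())  # complete JSON document
            except json.JSONDecodeError:
                continue  # reply still partial, keep reading

if __name__ == "__main__":
    # Mirrors: rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    reply = rpc_call("bdev_ftl_unmap",
                     {"name": "ftl0", "lba": 0, "num_blocks": 1024})
    print(reply)  # the log above shows these unmap calls returning true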
00:15:55.012 [2024-10-08 10:47:15.438100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86749 ] 00:15:55.012 [2024-10-08 10:47:15.569096] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:55.296 [2024-10-08 10:47:15.588814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.296 [2024-10-08 10:47:15.641420] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.296 [2024-10-08 10:47:15.754326] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:55.296 [2024-10-08 10:47:15.754414] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:15:55.568 [2024-10-08 10:47:15.917202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.568 [2024-10-08 10:47:15.917266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:55.568 [2024-10-08 10:47:15.917280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:55.568 [2024-10-08 10:47:15.917293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.568 [2024-10-08 10:47:15.919849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.568 [2024-10-08 10:47:15.919898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:55.568 [2024-10-08 10:47:15.919914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.534 ms 00:15:55.569 [2024-10-08 10:47:15.919922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.569 [2024-10-08 10:47:15.920035] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:55.569 [2024-10-08 10:47:15.920299] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:55.569 [2024-10-08 10:47:15.920315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.569 [2024-10-08 10:47:15.920323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:55.569 [2024-10-08 10:47:15.920337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:15:55.569 [2024-10-08 10:47:15.920345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.569 [2024-10-08 10:47:15.922125] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:15:55.569 [2024-10-08 10:47:15.926129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.569 [2024-10-08 10:47:15.926188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:15:55.569 [2024-10-08 10:47:15.926203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.006 ms 00:15:55.569 [2024-10-08 10:47:15.926211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.569 [2024-10-08 10:47:15.926315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.569 [2024-10-08 10:47:15.926327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:15:55.569 [2024-10-08 10:47:15.926337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:15:55.569 [2024-10-08 
10:47:15.926345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.569 [2024-10-08 10:47:15.935051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.569 [2024-10-08 10:47:15.935095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:55.569 [2024-10-08 10:47:15.935105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.655 ms 00:15:55.569 [2024-10-08 10:47:15.935113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.569 [2024-10-08 10:47:15.935244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.569 [2024-10-08 10:47:15.935256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:55.569 [2024-10-08 10:47:15.935265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:55.569 [2024-10-08 10:47:15.935273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.569 [2024-10-08 10:47:15.935301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.569 [2024-10-08 10:47:15.935314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:55.569 [2024-10-08 10:47:15.935322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:55.569 [2024-10-08 10:47:15.935333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.569 [2024-10-08 10:47:15.935357] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:15:55.569 [2024-10-08 10:47:15.937501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.569 [2024-10-08 10:47:15.937679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:55.569 [2024-10-08 10:47:15.937697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.151 ms 00:15:55.569 [2024-10-08 10:47:15.937705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.569 [2024-10-08 10:47:15.937755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.569 [2024-10-08 10:47:15.937773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:55.569 [2024-10-08 10:47:15.937787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:15:55.569 [2024-10-08 10:47:15.937795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.569 [2024-10-08 10:47:15.937858] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:15:55.569 [2024-10-08 10:47:15.937881] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:15:55.569 [2024-10-08 10:47:15.937922] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:15:55.569 [2024-10-08 10:47:15.937938] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:15:55.569 [2024-10-08 10:47:15.938049] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:55.569 [2024-10-08 10:47:15.938061] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:55.569 [2024-10-08 10:47:15.938073] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
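The superblock layout dump a few entries below ('SB metadata layout - nvc' and 'SB metadata layout - base dev') lists each region as type/ver/blk_offs/blk_sz in hexadecimal FTL blocks, while the NV cache and base device layout dumps give offsets and sizes in MiB. The two agree under a 4 KiB block size, which this dump itself implies: region type 0x2 has blk_sz 0x5a00 (23040 blocks), the matching l2p region reports 90.00 MiB, and 90 MiB / 23040 is exactly 4096 bytes. A small sketch to convert one form into the other:

import re

FTL_BLOCK_SIZE = 4096  # bytes per FTL block, as derived from the dump

def decode_region(line):
    # Parses lines like "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00".
    m = re.search(
        r"type:(0x[0-9a-f]+) ver:(\d+) blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)",
        line)
    if m is None:
        return None
    rtype, ver, offs, size = m.groups()
    to_mib = lambda h: int(h, 16) * FTL_BLOCK_SIZE / (1 << 20)
    return rtype, int(ver), to_mib(offs), to_mib(size)

if __name__ == "__main__":
    sample = "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00"
    rtype, ver, offs_mib, size_mib = decode_region(sample)
    print(f"type {rtype} v{ver}: offset {offs_mib:.2f} MiB, size {size_mib:.2f} MiB")
    # -> type 0x2 v0: offset 0.12 MiB, size 90.00 MiB, matching the l2p
    #    region in the NV cache layout dump.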
00:15:55.569 [2024-10-08 10:47:15.938087] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:55.569 [2024-10-08 10:47:15.938097] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:55.569 [2024-10-08 10:47:15.938105] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:15:55.569 [2024-10-08 10:47:15.938116] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:55.569 [2024-10-08 10:47:15.938124] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:55.569 [2024-10-08 10:47:15.938132] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:55.569 [2024-10-08 10:47:15.938143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.569 [2024-10-08 10:47:15.938156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:55.569 [2024-10-08 10:47:15.938164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:15:55.569 [2024-10-08 10:47:15.938172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.569 [2024-10-08 10:47:15.938260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.569 [2024-10-08 10:47:15.938270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:55.569 [2024-10-08 10:47:15.938280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:15:55.569 [2024-10-08 10:47:15.938288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.569 [2024-10-08 10:47:15.938394] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:55.569 [2024-10-08 10:47:15.938406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:55.569 [2024-10-08 10:47:15.938415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:55.569 [2024-10-08 10:47:15.938427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.569 [2024-10-08 10:47:15.938435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:55.569 [2024-10-08 10:47:15.938444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:55.569 [2024-10-08 10:47:15.938459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:15:55.569 [2024-10-08 10:47:15.938471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:55.569 [2024-10-08 10:47:15.938481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:15:55.569 [2024-10-08 10:47:15.938490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:55.569 [2024-10-08 10:47:15.938498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:55.569 [2024-10-08 10:47:15.938507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:15:55.569 [2024-10-08 10:47:15.938515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:55.569 [2024-10-08 10:47:15.938523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:55.569 [2024-10-08 10:47:15.938531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:15:55.569 [2024-10-08 10:47:15.938540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.569 [2024-10-08 10:47:15.938548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:15:55.569 [2024-10-08 10:47:15.938556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:15:55.569 [2024-10-08 10:47:15.938563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.569 [2024-10-08 10:47:15.938572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:55.569 [2024-10-08 10:47:15.938580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:15:55.569 [2024-10-08 10:47:15.938588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:55.569 [2024-10-08 10:47:15.938596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:55.569 [2024-10-08 10:47:15.938609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:15:55.569 [2024-10-08 10:47:15.938617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:55.569 [2024-10-08 10:47:15.938627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:55.569 [2024-10-08 10:47:15.938635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:15:55.569 [2024-10-08 10:47:15.938643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:55.569 [2024-10-08 10:47:15.938652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:55.569 [2024-10-08 10:47:15.938661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:15:55.569 [2024-10-08 10:47:15.938669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:55.569 [2024-10-08 10:47:15.938677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:55.569 [2024-10-08 10:47:15.938685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:15:55.569 [2024-10-08 10:47:15.938693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:55.569 [2024-10-08 10:47:15.938700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:55.569 [2024-10-08 10:47:15.938706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:15:55.569 [2024-10-08 10:47:15.938714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:55.569 [2024-10-08 10:47:15.938721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:55.569 [2024-10-08 10:47:15.938728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:15:55.569 [2024-10-08 10:47:15.938737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.569 [2024-10-08 10:47:15.938744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:55.569 [2024-10-08 10:47:15.938752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:15:55.569 [2024-10-08 10:47:15.938758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.569 [2024-10-08 10:47:15.938766] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:55.569 [2024-10-08 10:47:15.938774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:55.569 [2024-10-08 10:47:15.938782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:55.569 [2024-10-08 10:47:15.938790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:55.569 [2024-10-08 10:47:15.938813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:55.569 [2024-10-08 10:47:15.938821] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:55.569 [2024-10-08 10:47:15.938828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:55.569 [2024-10-08 10:47:15.938835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:55.570 [2024-10-08 10:47:15.938842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:55.570 [2024-10-08 10:47:15.938848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:55.570 [2024-10-08 10:47:15.938858] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:55.570 [2024-10-08 10:47:15.938867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:55.570 [2024-10-08 10:47:15.938879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:15:55.570 [2024-10-08 10:47:15.938887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:15:55.570 [2024-10-08 10:47:15.938896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:15:55.570 [2024-10-08 10:47:15.938904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:15:55.570 [2024-10-08 10:47:15.938911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:15:55.570 [2024-10-08 10:47:15.938919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:15:55.570 [2024-10-08 10:47:15.938927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:15:55.570 [2024-10-08 10:47:15.938935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:15:55.570 [2024-10-08 10:47:15.938942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:15:55.570 [2024-10-08 10:47:15.938950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:15:55.570 [2024-10-08 10:47:15.938957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:15:55.570 [2024-10-08 10:47:15.938965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:15:55.570 [2024-10-08 10:47:15.938972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:15:55.570 [2024-10-08 10:47:15.938980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:15:55.570 [2024-10-08 10:47:15.938987] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:55.570 [2024-10-08 10:47:15.938995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:55.570 [2024-10-08 10:47:15.939007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:55.570 [2024-10-08 10:47:15.939014] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:55.570 [2024-10-08 10:47:15.939022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:55.570 [2024-10-08 10:47:15.939029] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:55.570 [2024-10-08 10:47:15.939037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:15.939047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:55.570 [2024-10-08 10:47:15.939055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:15:55.570 [2024-10-08 10:47:15.939066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:15.962949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:15.963008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:55.570 [2024-10-08 10:47:15.963023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.828 ms 00:15:55.570 [2024-10-08 10:47:15.963032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:15.963192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:15.963205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:55.570 [2024-10-08 10:47:15.963227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:15:55.570 [2024-10-08 10:47:15.963235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:15.975876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:15.975924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:55.570 [2024-10-08 10:47:15.975936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.614 ms 00:15:55.570 [2024-10-08 10:47:15.975945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:15.976031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:15.976046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:55.570 [2024-10-08 10:47:15.976064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:55.570 [2024-10-08 10:47:15.976073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:15.976598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:15.976634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:55.570 [2024-10-08 10:47:15.976647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.499 ms 00:15:55.570 [2024-10-08 10:47:15.976657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:15.976851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:15.976985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:55.570 [2024-10-08 10:47:15.977000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:15:55.570 [2024-10-08 10:47:15.977045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:15.984917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:15.984959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:55.570 [2024-10-08 10:47:15.984970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.841 ms 00:15:55.570 [2024-10-08 10:47:15.984984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:15.989041] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:15:55.570 [2024-10-08 10:47:15.989095] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:15:55.570 [2024-10-08 10:47:15.989109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:15.989118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:15:55.570 [2024-10-08 10:47:15.989127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.007 ms 00:15:55.570 [2024-10-08 10:47:15.989134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:16.005076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:16.005127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:15:55.570 [2024-10-08 10:47:16.005141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.859 ms 00:15:55.570 [2024-10-08 10:47:16.005150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:16.008120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:16.008302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:15:55.570 [2024-10-08 10:47:16.008321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.873 ms 00:15:55.570 [2024-10-08 10:47:16.008329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:16.011135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:16.011195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:15:55.570 [2024-10-08 10:47:16.011206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.758 ms 00:15:55.570 [2024-10-08 10:47:16.011213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:16.011564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:16.011578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:55.570 [2024-10-08 10:47:16.011587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:15:55.570 [2024-10-08 10:47:16.011595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:16.035320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:16.035550] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:15:55.570 [2024-10-08 10:47:16.035583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.700 ms 00:15:55.570 [2024-10-08 10:47:16.035592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:16.044083] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:55.570 [2024-10-08 10:47:16.063847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:16.063896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:55.570 [2024-10-08 10:47:16.063909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.133 ms 00:15:55.570 [2024-10-08 10:47:16.063918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:16.064013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:16.064025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:15:55.570 [2024-10-08 10:47:16.064035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:15:55.570 [2024-10-08 10:47:16.064045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:16.064106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:16.064117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:55.570 [2024-10-08 10:47:16.064126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:55.570 [2024-10-08 10:47:16.064134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:16.064159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:16.064168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:55.570 [2024-10-08 10:47:16.064183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:55.570 [2024-10-08 10:47:16.064192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:16.064232] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:15:55.570 [2024-10-08 10:47:16.064246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:16.064254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:15:55.570 [2024-10-08 10:47:16.064267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:15:55.570 [2024-10-08 10:47:16.064275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:16.070528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:16.070577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:55.570 [2024-10-08 10:47:16.070588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.232 ms 00:15:55.570 [2024-10-08 10:47:16.070597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.570 [2024-10-08 10:47:16.070695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:55.570 [2024-10-08 10:47:16.070710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:55.571 [2024-10-08 10:47:16.070719] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:15:55.571 [2024-10-08 10:47:16.070728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:55.571 [2024-10-08 10:47:16.073220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:55.571 [2024-10-08 10:47:16.076247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 155.148 ms, result 0 00:15:55.571 [2024-10-08 10:47:16.078010] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:55.571 [2024-10-08 10:47:16.085496] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:56.517  [2024-10-08T10:47:18.481Z] Copying: 19/256 [MB] (19 MBps) [2024-10-08T10:47:19.423Z] Copying: 36/256 [MB] (17 MBps) [2024-10-08T10:47:20.366Z] Copying: 50/256 [MB] (14 MBps) [2024-10-08T10:47:21.310Z] Copying: 64/256 [MB] (13 MBps) [2024-10-08T10:47:22.255Z] Copying: 77/256 [MB] (13 MBps) [2024-10-08T10:47:23.200Z] Copying: 95/256 [MB] (17 MBps) [2024-10-08T10:47:24.144Z] Copying: 106/256 [MB] (11 MBps) [2024-10-08T10:47:25.527Z] Copying: 118/256 [MB] (11 MBps) [2024-10-08T10:47:26.097Z] Copying: 133/256 [MB] (14 MBps) [2024-10-08T10:47:27.480Z] Copying: 143/256 [MB] (10 MBps) [2024-10-08T10:47:28.422Z] Copying: 153/256 [MB] (10 MBps) [2024-10-08T10:47:29.365Z] Copying: 164/256 [MB] (10 MBps) [2024-10-08T10:47:30.309Z] Copying: 175/256 [MB] (10 MBps) [2024-10-08T10:47:31.253Z] Copying: 185/256 [MB] (10 MBps) [2024-10-08T10:47:32.198Z] Copying: 195/256 [MB] (10 MBps) [2024-10-08T10:47:33.165Z] Copying: 211/256 [MB] (15 MBps) [2024-10-08T10:47:34.123Z] Copying: 222/256 [MB] (11 MBps) [2024-10-08T10:47:35.512Z] Copying: 233/256 [MB] (10 MBps) [2024-10-08T10:47:35.512Z] Copying: 251/256 [MB] (18 MBps) [2024-10-08T10:47:35.512Z] Copying: 256/256 [MB] (average 13 MBps)[2024-10-08 10:47:35.344647] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:14.935 [2024-10-08 10:47:35.346532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.935 [2024-10-08 10:47:35.346582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:14.935 [2024-10-08 10:47:35.346603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:14.935 [2024-10-08 10:47:35.346616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.935 [2024-10-08 10:47:35.346639] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:14.935 [2024-10-08 10:47:35.347337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.935 [2024-10-08 10:47:35.347388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:14.935 [2024-10-08 10:47:35.347401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:16:14.935 [2024-10-08 10:47:35.347411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.935 [2024-10-08 10:47:35.347675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.935 [2024-10-08 10:47:35.347687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:14.935 [2024-10-08 10:47:35.347697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:16:14.935 
[2024-10-08 10:47:35.347706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.935 [2024-10-08 10:47:35.351436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.935 [2024-10-08 10:47:35.351619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:14.935 [2024-10-08 10:47:35.351644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.710 ms 00:16:14.935 [2024-10-08 10:47:35.351653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.935 [2024-10-08 10:47:35.358657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.935 [2024-10-08 10:47:35.358845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:14.935 [2024-10-08 10:47:35.358865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.979 ms 00:16:14.935 [2024-10-08 10:47:35.358873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.935 [2024-10-08 10:47:35.361732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.935 [2024-10-08 10:47:35.361780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:14.935 [2024-10-08 10:47:35.361791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.791 ms 00:16:14.935 [2024-10-08 10:47:35.361815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.935 [2024-10-08 10:47:35.366664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.935 [2024-10-08 10:47:35.366716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:14.935 [2024-10-08 10:47:35.366736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.803 ms 00:16:14.935 [2024-10-08 10:47:35.366744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.935 [2024-10-08 10:47:35.366902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.935 [2024-10-08 10:47:35.366914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:14.935 [2024-10-08 10:47:35.366929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:14.935 [2024-10-08 10:47:35.366937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.935 [2024-10-08 10:47:35.369788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.935 [2024-10-08 10:47:35.369843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:14.935 [2024-10-08 10:47:35.369853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.828 ms 00:16:14.935 [2024-10-08 10:47:35.369861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.935 [2024-10-08 10:47:35.372653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.936 [2024-10-08 10:47:35.372696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:14.936 [2024-10-08 10:47:35.372705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.749 ms 00:16:14.936 [2024-10-08 10:47:35.372712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.936 [2024-10-08 10:47:35.375087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.936 [2024-10-08 10:47:35.375134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:14.936 [2024-10-08 10:47:35.375143] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.333 ms 00:16:14.936 [2024-10-08 10:47:35.375149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.936 [2024-10-08 10:47:35.377226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.936 [2024-10-08 10:47:35.377270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:14.936 [2024-10-08 10:47:35.377281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.006 ms 00:16:14.936 [2024-10-08 10:47:35.377288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.936 [2024-10-08 10:47:35.377328] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:14.936 [2024-10-08 10:47:35.377343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 
261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377892] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:14.936 [2024-10-08 10:47:35.377931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.377939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.377947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.377955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.377962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.377971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.377978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.377986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.377994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 
10:47:35.378088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:14.937 [2024-10-08 10:47:35.378142] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:14.937 [2024-10-08 10:47:35.378151] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e839222-06df-480c-bbd4-867c33d1e348 00:16:14.937 [2024-10-08 10:47:35.378160] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:14.937 [2024-10-08 10:47:35.378167] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:14.937 [2024-10-08 10:47:35.378175] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:14.937 [2024-10-08 10:47:35.378183] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:14.937 [2024-10-08 10:47:35.378190] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:14.937 [2024-10-08 10:47:35.378206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:14.937 [2024-10-08 10:47:35.378214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:14.937 [2024-10-08 10:47:35.378221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:14.937 [2024-10-08 10:47:35.378227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:14.937 [2024-10-08 10:47:35.378240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.937 [2024-10-08 10:47:35.378247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:14.937 [2024-10-08 10:47:35.378259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:16:14.937 [2024-10-08 10:47:35.378267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.937 [2024-10-08 10:47:35.380427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.937 [2024-10-08 10:47:35.380574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:14.937 [2024-10-08 10:47:35.380592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.141 ms 00:16:14.937 [2024-10-08 10:47:35.380601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.937 [2024-10-08 10:47:35.380732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.937 [2024-10-08 10:47:35.380743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:14.937 [2024-10-08 10:47:35.380752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:16:14.937 [2024-10-08 10:47:35.380759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.937 [2024-10-08 10:47:35.387934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:16:14.937 [2024-10-08 10:47:35.387978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:14.937 [2024-10-08 10:47:35.387989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.937 [2024-10-08 10:47:35.387997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.937 [2024-10-08 10:47:35.388069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.937 [2024-10-08 10:47:35.388077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:14.937 [2024-10-08 10:47:35.388086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.937 [2024-10-08 10:47:35.388094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.937 [2024-10-08 10:47:35.388142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.937 [2024-10-08 10:47:35.388152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:14.937 [2024-10-08 10:47:35.388160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.937 [2024-10-08 10:47:35.388167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.937 [2024-10-08 10:47:35.388185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.937 [2024-10-08 10:47:35.388200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:14.937 [2024-10-08 10:47:35.388207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.937 [2024-10-08 10:47:35.388215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.937 [2024-10-08 10:47:35.401665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.937 [2024-10-08 10:47:35.401717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:14.937 [2024-10-08 10:47:35.401728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.937 [2024-10-08 10:47:35.401737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.937 [2024-10-08 10:47:35.412649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.937 [2024-10-08 10:47:35.412711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:14.937 [2024-10-08 10:47:35.412722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.937 [2024-10-08 10:47:35.412732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.937 [2024-10-08 10:47:35.412833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.937 [2024-10-08 10:47:35.412845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:14.937 [2024-10-08 10:47:35.412854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.937 [2024-10-08 10:47:35.412863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.937 [2024-10-08 10:47:35.412896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.937 [2024-10-08 10:47:35.412905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:14.937 [2024-10-08 10:47:35.412914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.937 [2024-10-08 10:47:35.412927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.937 [2024-10-08 
10:47:35.413002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:14.937 [2024-10-08 10:47:35.413012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:16:14.937 [2024-10-08 10:47:35.413021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:14.937 [2024-10-08 10:47:35.413030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:14.937 [2024-10-08 10:47:35.413090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:14.937 [2024-10-08 10:47:35.413101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:16:14.937 [2024-10-08 10:47:35.413109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:14.937 [2024-10-08 10:47:35.413123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:14.937 [2024-10-08 10:47:35.413167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:14.937 [2024-10-08 10:47:35.413178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:14.937 [2024-10-08 10:47:35.413186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:14.937 [2024-10-08 10:47:35.413195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:14.937 [2024-10-08 10:47:35.413245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:14.937 [2024-10-08 10:47:35.413255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:14.937 [2024-10-08 10:47:35.413266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:14.937 [2024-10-08 10:47:35.413278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:14.937 [2024-10-08 10:47:35.413439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.881 ms, result 0
00:16:15.199
00:16:15.199
00:16:15.199 10:47:35 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:16:15.199 10:47:35 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:16:15.771 10:47:36 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:16:15.771 [2024-10-08 10:47:36.323780] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... [2024-10-08 10:47:36.323941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86969 ]
00:16:16.033 [2024-10-08 10:47:36.456794] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
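
The three ftl.ftl_trim command lines traced above are the core of the trim check: after 'FTL shutdown' persists the device state, trim.sh compares the first 4194304 bytes (4 MiB) of the dumped test/ftl/data file against /dev/zero, fingerprints the file with md5sum, and then rewrites the random pattern through the ftl0 bdev via spdk_dd. Since trim.sh itself is not reproduced in this log, the following is only a sketch of that sequence reconstructed from the traced command lines; the SPDK_REPO variable and the standalone-script framing are assumptions, and only the paths and flags shown above come from the log.

    #!/usr/bin/env bash
    # Sketch of the steps traced at ftl/trim.sh@86-90 above (not the real trim.sh).
    # Paths and flags are copied from the log; everything else is illustrative.
    set -euo pipefail
    SPDK_REPO=/home/vagrant/spdk_repo/spdk   # assumed variable; path as seen in the log

    # trim.sh@86: the trimmed 4 MiB region should read back as zeroes;
    # cmp exits non-zero (failing the test) if any byte differs.
    cmp --bytes=4194304 "$SPDK_REPO/test/ftl/data" /dev/zero

    # trim.sh@87: record a checksum of the dumped data for later comparison.
    md5sum "$SPDK_REPO/test/ftl/data"

    # trim.sh@90: rewrite 1024 blocks of the random pattern through the ftl0
    # bdev; spdk_dd is a one-shot SPDK app configured by ftl.json.
    "$SPDK_REPO/build/bin/spdk_dd" \
        --if="$SPDK_REPO/test/ftl/random_pattern" \
        --ob=ftl0 \
        --count=1024 \
        --json="$SPDK_REPO/test/ftl/config/ftl.json"

The 'FTL startup' trace that follows is this spdk_dd process reopening the same device from the state just persisted, which is why the log below again walks through the superblock load, layout verification, and the L2P/P2L restore steps before the 4096 kB copy runs.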
00:16:16.033 [2024-10-08 10:47:36.477452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.033 [2024-10-08 10:47:36.527344] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.296 [2024-10-08 10:47:36.641498] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:16.296 [2024-10-08 10:47:36.641582] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:16.296 [2024-10-08 10:47:36.802030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.296 [2024-10-08 10:47:36.802271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:16.296 [2024-10-08 10:47:36.802297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:16.296 [2024-10-08 10:47:36.802312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.296 [2024-10-08 10:47:36.804850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.296 [2024-10-08 10:47:36.804893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:16.296 [2024-10-08 10:47:36.804906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.511 ms 00:16:16.296 [2024-10-08 10:47:36.804914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.296 [2024-10-08 10:47:36.805020] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:16.296 [2024-10-08 10:47:36.805302] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:16.296 [2024-10-08 10:47:36.805318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.296 [2024-10-08 10:47:36.805326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:16.296 [2024-10-08 10:47:36.805343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:16:16.296 [2024-10-08 10:47:36.805351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.296 [2024-10-08 10:47:36.807361] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:16.296 [2024-10-08 10:47:36.811109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.296 [2024-10-08 10:47:36.811267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:16.296 [2024-10-08 10:47:36.811337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.751 ms 00:16:16.296 [2024-10-08 10:47:36.811363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.296 [2024-10-08 10:47:36.811442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.296 [2024-10-08 10:47:36.811471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:16.296 [2024-10-08 10:47:36.811492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:16.296 [2024-10-08 10:47:36.811511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.296 [2024-10-08 10:47:36.819348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.296 [2024-10-08 10:47:36.819498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:16.296 [2024-10-08 10:47:36.819557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.780 ms 00:16:16.296 [2024-10-08 10:47:36.819580] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.296 [2024-10-08 10:47:36.819729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.296 [2024-10-08 10:47:36.819845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:16.296 [2024-10-08 10:47:36.819872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:16.296 [2024-10-08 10:47:36.819892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.296 [2024-10-08 10:47:36.820367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.296 [2024-10-08 10:47:36.820423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:16.296 [2024-10-08 10:47:36.820439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:16.296 [2024-10-08 10:47:36.820451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.296 [2024-10-08 10:47:36.820480] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:16.296 [2024-10-08 10:47:36.822519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.296 [2024-10-08 10:47:36.822557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:16.296 [2024-10-08 10:47:36.822568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.046 ms 00:16:16.296 [2024-10-08 10:47:36.822576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.296 [2024-10-08 10:47:36.822618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.296 [2024-10-08 10:47:36.822634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:16.296 [2024-10-08 10:47:36.822643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:16.296 [2024-10-08 10:47:36.822651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.296 [2024-10-08 10:47:36.822670] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:16.296 [2024-10-08 10:47:36.822696] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:16.296 [2024-10-08 10:47:36.822733] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:16.296 [2024-10-08 10:47:36.822751] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:16.296 [2024-10-08 10:47:36.822885] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:16.296 [2024-10-08 10:47:36.822897] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:16.296 [2024-10-08 10:47:36.822909] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:16.296 [2024-10-08 10:47:36.822919] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:16.296 [2024-10-08 10:47:36.822928] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:16.296 [2024-10-08 10:47:36.822942] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:16.296 [2024-10-08 10:47:36.822950] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:16:16.296 [2024-10-08 10:47:36.822958] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:16.296 [2024-10-08 10:47:36.822965] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:16.296 [2024-10-08 10:47:36.822975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.296 [2024-10-08 10:47:36.822985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:16.296 [2024-10-08 10:47:36.822993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:16:16.296 [2024-10-08 10:47:36.823000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.296 [2024-10-08 10:47:36.823092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.296 [2024-10-08 10:47:36.823101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:16.296 [2024-10-08 10:47:36.823108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:16.296 [2024-10-08 10:47:36.823116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.296 [2024-10-08 10:47:36.823216] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:16.296 [2024-10-08 10:47:36.823228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:16.296 [2024-10-08 10:47:36.823239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:16.296 [2024-10-08 10:47:36.823251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.296 [2024-10-08 10:47:36.823262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:16.296 [2024-10-08 10:47:36.823270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:16.296 [2024-10-08 10:47:36.823285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:16.296 [2024-10-08 10:47:36.823295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:16.296 [2024-10-08 10:47:36.823303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:16.297 [2024-10-08 10:47:36.823311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:16.297 [2024-10-08 10:47:36.823319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:16.297 [2024-10-08 10:47:36.823326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:16.297 [2024-10-08 10:47:36.823333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:16.297 [2024-10-08 10:47:36.823342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:16.297 [2024-10-08 10:47:36.823350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:16.297 [2024-10-08 10:47:36.823358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.297 [2024-10-08 10:47:36.823367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:16.297 [2024-10-08 10:47:36.823375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:16.297 [2024-10-08 10:47:36.823385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.297 [2024-10-08 10:47:36.823394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:16.297 [2024-10-08 10:47:36.823402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:16.297 [2024-10-08 10:47:36.823410] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:16.297 [2024-10-08 10:47:36.823417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:16.297 [2024-10-08 10:47:36.823430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:16.297 [2024-10-08 10:47:36.823438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:16.297 [2024-10-08 10:47:36.823446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:16.297 [2024-10-08 10:47:36.823453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:16.297 [2024-10-08 10:47:36.823461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:16.297 [2024-10-08 10:47:36.823469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:16.297 [2024-10-08 10:47:36.823477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:16.297 [2024-10-08 10:47:36.823484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:16.297 [2024-10-08 10:47:36.823492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:16.297 [2024-10-08 10:47:36.823499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:16.297 [2024-10-08 10:47:36.823508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:16.297 [2024-10-08 10:47:36.823517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:16.297 [2024-10-08 10:47:36.823525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:16.297 [2024-10-08 10:47:36.823532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:16.297 [2024-10-08 10:47:36.823541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:16.297 [2024-10-08 10:47:36.823548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:16.297 [2024-10-08 10:47:36.823560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.297 [2024-10-08 10:47:36.823569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:16.297 [2024-10-08 10:47:36.823577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:16.297 [2024-10-08 10:47:36.823584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.297 [2024-10-08 10:47:36.823591] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:16.297 [2024-10-08 10:47:36.823599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:16.297 [2024-10-08 10:47:36.823607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:16.297 [2024-10-08 10:47:36.823614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:16.297 [2024-10-08 10:47:36.823622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:16.297 [2024-10-08 10:47:36.823630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:16.297 [2024-10-08 10:47:36.823637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:16.297 [2024-10-08 10:47:36.823645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:16.297 [2024-10-08 10:47:36.823651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:16.297 [2024-10-08 10:47:36.823658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:16:16.297 [2024-10-08 10:47:36.823667] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:16.297 [2024-10-08 10:47:36.823680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:16.297 [2024-10-08 10:47:36.823690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:16.297 [2024-10-08 10:47:36.823698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:16.297 [2024-10-08 10:47:36.823704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:16.297 [2024-10-08 10:47:36.823712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:16.297 [2024-10-08 10:47:36.823719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:16.297 [2024-10-08 10:47:36.823725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:16.297 [2024-10-08 10:47:36.823732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:16.297 [2024-10-08 10:47:36.823739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:16.297 [2024-10-08 10:47:36.823746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:16.297 [2024-10-08 10:47:36.823754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:16.297 [2024-10-08 10:47:36.823761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:16.297 [2024-10-08 10:47:36.823768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:16.297 [2024-10-08 10:47:36.823775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:16.297 [2024-10-08 10:47:36.823782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:16.297 [2024-10-08 10:47:36.823824] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:16.297 [2024-10-08 10:47:36.823832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:16.297 [2024-10-08 10:47:36.823843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:16.297 [2024-10-08 10:47:36.823851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:16.297 [2024-10-08 10:47:36.823858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:16.297 [2024-10-08 10:47:36.823865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:16.297 [2024-10-08 10:47:36.823872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.297 [2024-10-08 10:47:36.823882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:16.297 [2024-10-08 10:47:36.823894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:16:16.297 [2024-10-08 10:47:36.823901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.297 [2024-10-08 10:47:36.845783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.297 [2024-10-08 10:47:36.845865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:16.297 [2024-10-08 10:47:36.845884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.809 ms 00:16:16.297 [2024-10-08 10:47:36.845897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.297 [2024-10-08 10:47:36.846080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.297 [2024-10-08 10:47:36.846096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:16.297 [2024-10-08 10:47:36.846114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:16:16.297 [2024-10-08 10:47:36.846124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.297 [2024-10-08 10:47:36.857942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.297 [2024-10-08 10:47:36.857984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:16.297 [2024-10-08 10:47:36.857996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.790 ms 00:16:16.297 [2024-10-08 10:47:36.858003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.297 [2024-10-08 10:47:36.858073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.297 [2024-10-08 10:47:36.858084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:16.297 [2024-10-08 10:47:36.858096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:16.297 [2024-10-08 10:47:36.858104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.297 [2024-10-08 10:47:36.858610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.297 [2024-10-08 10:47:36.858640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:16.297 [2024-10-08 10:47:36.858651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.478 ms 00:16:16.297 [2024-10-08 10:47:36.858666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.297 [2024-10-08 10:47:36.858861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.297 [2024-10-08 10:47:36.858879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:16.297 [2024-10-08 10:47:36.858890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:16:16.297 [2024-10-08 10:47:36.858901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.297 [2024-10-08 10:47:36.866303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.297 [2024-10-08 
10:47:36.866345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:16.297 [2024-10-08 10:47:36.866363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.375 ms 00:16:16.297 [2024-10-08 10:47:36.866374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.559 [2024-10-08 10:47:36.870202] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:16.559 [2024-10-08 10:47:36.870252] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:16.559 [2024-10-08 10:47:36.870265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.559 [2024-10-08 10:47:36.870273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:16.560 [2024-10-08 10:47:36.870282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.798 ms 00:16:16.560 [2024-10-08 10:47:36.870290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.560 [2024-10-08 10:47:36.885998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.560 [2024-10-08 10:47:36.886053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:16.560 [2024-10-08 10:47:36.886065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.645 ms 00:16:16.560 [2024-10-08 10:47:36.886073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.560 [2024-10-08 10:47:36.889256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.560 [2024-10-08 10:47:36.889311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:16.560 [2024-10-08 10:47:36.889323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.093 ms 00:16:16.560 [2024-10-08 10:47:36.889330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.560 [2024-10-08 10:47:36.891826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.560 [2024-10-08 10:47:36.891877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:16.560 [2024-10-08 10:47:36.891887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.436 ms 00:16:16.560 [2024-10-08 10:47:36.891895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.560 [2024-10-08 10:47:36.892239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.560 [2024-10-08 10:47:36.892251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:16.560 [2024-10-08 10:47:36.892263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:16:16.560 [2024-10-08 10:47:36.892278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.560 [2024-10-08 10:47:36.915047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.560 [2024-10-08 10:47:36.915274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:16.560 [2024-10-08 10:47:36.915296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.746 ms 00:16:16.560 [2024-10-08 10:47:36.915307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.560 [2024-10-08 10:47:36.923558] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:16.560 [2024-10-08 10:47:36.942567] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.560 [2024-10-08 10:47:36.942745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:16.560 [2024-10-08 10:47:36.942765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.175 ms 00:16:16.560 [2024-10-08 10:47:36.942774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.560 [2024-10-08 10:47:36.942886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.560 [2024-10-08 10:47:36.942898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:16.560 [2024-10-08 10:47:36.942908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:16.560 [2024-10-08 10:47:36.942916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.560 [2024-10-08 10:47:36.942978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.560 [2024-10-08 10:47:36.942994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:16.560 [2024-10-08 10:47:36.943003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:16.560 [2024-10-08 10:47:36.943012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.560 [2024-10-08 10:47:36.943039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.560 [2024-10-08 10:47:36.943048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:16.560 [2024-10-08 10:47:36.943056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:16.560 [2024-10-08 10:47:36.943065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.560 [2024-10-08 10:47:36.943103] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:16.560 [2024-10-08 10:47:36.943116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.560 [2024-10-08 10:47:36.943125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:16.560 [2024-10-08 10:47:36.943133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:16.560 [2024-10-08 10:47:36.943141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.560 [2024-10-08 10:47:36.949072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.560 [2024-10-08 10:47:36.949233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:16.560 [2024-10-08 10:47:36.949252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.878 ms 00:16:16.560 [2024-10-08 10:47:36.949261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.560 [2024-10-08 10:47:36.949350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.560 [2024-10-08 10:47:36.949364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:16.560 [2024-10-08 10:47:36.949373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:16.560 [2024-10-08 10:47:36.949382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.560 [2024-10-08 10:47:36.950471] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:16.560 [2024-10-08 10:47:36.951787] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 148.122 
ms, result 0 00:16:16.560 [2024-10-08 10:47:36.952723] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:16.560 [2024-10-08 10:47:36.960447] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:16.823  [2024-10-08T10:47:37.400Z] Copying: 4096/4096 [kB] (average 16 MBps)[2024-10-08 10:47:37.205203] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:16.823 [2024-10-08 10:47:37.206257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.823 [2024-10-08 10:47:37.206311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:16.823 [2024-10-08 10:47:37.206327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:16.823 [2024-10-08 10:47:37.206337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.823 [2024-10-08 10:47:37.206359] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:16.823 [2024-10-08 10:47:37.207103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.823 [2024-10-08 10:47:37.207144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:16.823 [2024-10-08 10:47:37.207170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:16:16.823 [2024-10-08 10:47:37.207181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.823 [2024-10-08 10:47:37.210163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.823 [2024-10-08 10:47:37.210320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:16.823 [2024-10-08 10:47:37.210339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.940 ms 00:16:16.823 [2024-10-08 10:47:37.210347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.823 [2024-10-08 10:47:37.214756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.823 [2024-10-08 10:47:37.214791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:16.823 [2024-10-08 10:47:37.214817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.382 ms 00:16:16.823 [2024-10-08 10:47:37.214825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.823 [2024-10-08 10:47:37.221920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.823 [2024-10-08 10:47:37.221972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:16.823 [2024-10-08 10:47:37.221982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.062 ms 00:16:16.823 [2024-10-08 10:47:37.221990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.823 [2024-10-08 10:47:37.225126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.823 [2024-10-08 10:47:37.225286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:16.823 [2024-10-08 10:47:37.225305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.083 ms 00:16:16.823 [2024-10-08 10:47:37.225312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.823 [2024-10-08 10:47:37.229830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.823 [2024-10-08 10:47:37.229874] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:16.823 [2024-10-08 10:47:37.229891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.431 ms 00:16:16.823 [2024-10-08 10:47:37.229900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.823 [2024-10-08 10:47:37.230104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.823 [2024-10-08 10:47:37.230116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:16.823 [2024-10-08 10:47:37.230125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:16:16.823 [2024-10-08 10:47:37.230132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.823 [2024-10-08 10:47:37.232938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.823 [2024-10-08 10:47:37.232982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:16.823 [2024-10-08 10:47:37.232992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.784 ms 00:16:16.823 [2024-10-08 10:47:37.232999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.823 [2024-10-08 10:47:37.235596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.823 [2024-10-08 10:47:37.235641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:16.823 [2024-10-08 10:47:37.235650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.556 ms 00:16:16.823 [2024-10-08 10:47:37.235657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.823 [2024-10-08 10:47:37.237955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.823 [2024-10-08 10:47:37.237999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:16.823 [2024-10-08 10:47:37.238009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.257 ms 00:16:16.823 [2024-10-08 10:47:37.238016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.823 [2024-10-08 10:47:37.240443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.823 [2024-10-08 10:47:37.240490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:16.823 [2024-10-08 10:47:37.240499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.338 ms 00:16:16.823 [2024-10-08 10:47:37.240506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.823 [2024-10-08 10:47:37.240545] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:16.823 [2024-10-08 10:47:37.240569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 
10:47:37.240617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:16.823 [2024-10-08 10:47:37.240730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:16:16.824 [2024-10-08 10:47:37.240828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.240997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:16.824 [2024-10-08 10:47:37.241390] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:16.824 [2024-10-08 10:47:37.241399] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e839222-06df-480c-bbd4-867c33d1e348 00:16:16.824 [2024-10-08 10:47:37.241409] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:16.824 [2024-10-08 10:47:37.241417] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:16.824 [2024-10-08 10:47:37.241425] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:16.824 [2024-10-08 10:47:37.241433] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:16.824 [2024-10-08 10:47:37.241441] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:16.824 [2024-10-08 10:47:37.241472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:16.824 [2024-10-08 10:47:37.241480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:16.824 [2024-10-08 10:47:37.241487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:16.824 [2024-10-08 10:47:37.241493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:16.824 [2024-10-08 10:47:37.241501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.824 [2024-10-08 10:47:37.241512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:16.824 [2024-10-08 10:47:37.241521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:16:16.824 [2024-10-08 10:47:37.241529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.824 [2024-10-08 10:47:37.243615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.824 [2024-10-08 10:47:37.243761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:16.824 [2024-10-08 10:47:37.243786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.067 ms 00:16:16.824 [2024-10-08 10:47:37.243809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.825 [2024-10-08 10:47:37.243933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:16.825 [2024-10-08 10:47:37.243947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:16.825 [2024-10-08 10:47:37.243956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:16:16.825 [2024-10-08 10:47:37.243963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.825 [2024-10-08 10:47:37.251163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.825 [2024-10-08 10:47:37.251208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:16.825 [2024-10-08 10:47:37.251218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.825 [2024-10-08 10:47:37.251234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.825 [2024-10-08 10:47:37.251317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.825 [2024-10-08 10:47:37.251326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:16.825 [2024-10-08 10:47:37.251339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.825 [2024-10-08 10:47:37.251346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.825 [2024-10-08 10:47:37.251391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.825 [2024-10-08 10:47:37.251400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:16.825 [2024-10-08 10:47:37.251408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.825 [2024-10-08 10:47:37.251415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.825 [2024-10-08 10:47:37.251432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.825 [2024-10-08 10:47:37.251443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:16.825 [2024-10-08 10:47:37.251450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:16:16.825 [2024-10-08 10:47:37.251457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.825 [2024-10-08 10:47:37.264990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.825 [2024-10-08 10:47:37.265054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:16.825 [2024-10-08 10:47:37.265070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.825 [2024-10-08 10:47:37.265084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.825 [2024-10-08 10:47:37.276047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.825 [2024-10-08 10:47:37.276107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:16.825 [2024-10-08 10:47:37.276118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.825 [2024-10-08 10:47:37.276126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.825 [2024-10-08 10:47:37.276173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.825 [2024-10-08 10:47:37.276182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:16.825 [2024-10-08 10:47:37.276197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.825 [2024-10-08 10:47:37.276206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.825 [2024-10-08 10:47:37.276240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.825 [2024-10-08 10:47:37.276249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:16.825 [2024-10-08 10:47:37.276260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.825 [2024-10-08 10:47:37.276268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.825 [2024-10-08 10:47:37.276347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.825 [2024-10-08 10:47:37.276358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:16.825 [2024-10-08 10:47:37.276366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.825 [2024-10-08 10:47:37.276375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.825 [2024-10-08 10:47:37.276406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.825 [2024-10-08 10:47:37.276416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:16.825 [2024-10-08 10:47:37.276426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.825 [2024-10-08 10:47:37.276437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.825 [2024-10-08 10:47:37.276482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.825 [2024-10-08 10:47:37.276491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:16.825 [2024-10-08 10:47:37.276499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.825 [2024-10-08 10:47:37.276508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.825 [2024-10-08 10:47:37.276562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:16.825 [2024-10-08 10:47:37.276573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:16.825 [2024-10-08 10:47:37.276585] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:16.825 [2024-10-08 10:47:37.276594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:16.825 [2024-10-08 10:47:37.276750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.460 ms, result 0 00:16:17.086 00:16:17.086 00:16:17.086 10:47:37 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=86990 00:16:17.086 10:47:37 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 86990 00:16:17.086 10:47:37 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86990 ']' 00:16:17.086 10:47:37 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:17.087 10:47:37 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.087 10:47:37 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:17.087 10:47:37 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.087 10:47:37 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:17.087 10:47:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:17.087 [2024-10-08 10:47:37.623517] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:16:17.087 [2024-10-08 10:47:37.623650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86990 ] 00:16:17.347 [2024-10-08 10:47:37.754963] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:17.347 [2024-10-08 10:47:37.776172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.347 [2024-10-08 10:47:37.826429] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.928 10:47:38 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:17.928 10:47:38 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:17.928 10:47:38 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:18.191 [2024-10-08 10:47:38.694909] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:18.191 [2024-10-08 10:47:38.694985] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:18.454 [2024-10-08 10:47:38.871585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.454 [2024-10-08 10:47:38.871646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:18.454 [2024-10-08 10:47:38.871664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:18.454 [2024-10-08 10:47:38.871673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.454 [2024-10-08 10:47:38.874227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.454 [2024-10-08 10:47:38.874276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:18.454 [2024-10-08 10:47:38.874289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.531 ms 00:16:18.454 [2024-10-08 10:47:38.874297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.454 [2024-10-08 10:47:38.874393] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:18.454 [2024-10-08 10:47:38.874657] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:18.454 [2024-10-08 10:47:38.874686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.454 [2024-10-08 10:47:38.874695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:18.454 [2024-10-08 10:47:38.874706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:16:18.454 [2024-10-08 10:47:38.874718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.454 [2024-10-08 10:47:38.876453] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:18.454 [2024-10-08 10:47:38.880195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.454 [2024-10-08 10:47:38.880250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:18.454 [2024-10-08 10:47:38.880261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.750 ms 00:16:18.454 [2024-10-08 10:47:38.880271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.454 [2024-10-08 10:47:38.880346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.454 [2024-10-08 10:47:38.880361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:18.454 [2024-10-08 10:47:38.880370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:18.454 [2024-10-08 10:47:38.880379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.454 [2024-10-08 10:47:38.888335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.454 [2024-10-08 
10:47:38.888530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:18.454 [2024-10-08 10:47:38.888554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.886 ms 00:16:18.454 [2024-10-08 10:47:38.888565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.454 [2024-10-08 10:47:38.888695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.454 [2024-10-08 10:47:38.888708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:18.454 [2024-10-08 10:47:38.888719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:16:18.454 [2024-10-08 10:47:38.888732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.454 [2024-10-08 10:47:38.888757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.454 [2024-10-08 10:47:38.888767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:18.454 [2024-10-08 10:47:38.888781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:18.454 [2024-10-08 10:47:38.888790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.454 [2024-10-08 10:47:38.888837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:18.454 [2024-10-08 10:47:38.890783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.454 [2024-10-08 10:47:38.890850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:18.454 [2024-10-08 10:47:38.890862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.945 ms 00:16:18.454 [2024-10-08 10:47:38.890876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.454 [2024-10-08 10:47:38.890917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.454 [2024-10-08 10:47:38.890925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:18.454 [2024-10-08 10:47:38.890936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:18.454 [2024-10-08 10:47:38.890943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.454 [2024-10-08 10:47:38.890966] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:18.454 [2024-10-08 10:47:38.890987] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:18.454 [2024-10-08 10:47:38.891034] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:18.454 [2024-10-08 10:47:38.891052] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:18.454 [2024-10-08 10:47:38.891160] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:18.454 [2024-10-08 10:47:38.891171] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:18.454 [2024-10-08 10:47:38.891186] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:18.454 [2024-10-08 10:47:38.891197] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:18.454 [2024-10-08 10:47:38.891211] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:18.454 [2024-10-08 10:47:38.891219] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:18.454 [2024-10-08 10:47:38.891232] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:18.454 [2024-10-08 10:47:38.891241] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:18.454 [2024-10-08 10:47:38.891252] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:18.454 [2024-10-08 10:47:38.891260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.454 [2024-10-08 10:47:38.891272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:18.454 [2024-10-08 10:47:38.891281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:16:18.454 [2024-10-08 10:47:38.891289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.454 [2024-10-08 10:47:38.891376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.454 [2024-10-08 10:47:38.891387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:18.454 [2024-10-08 10:47:38.891395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:18.454 [2024-10-08 10:47:38.891404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.454 [2024-10-08 10:47:38.891507] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:18.454 [2024-10-08 10:47:38.891523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:18.454 [2024-10-08 10:47:38.891532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:18.454 [2024-10-08 10:47:38.891546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.454 [2024-10-08 10:47:38.891555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:18.454 [2024-10-08 10:47:38.891566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:18.454 [2024-10-08 10:47:38.891574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:18.454 [2024-10-08 10:47:38.891585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:18.454 [2024-10-08 10:47:38.891593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:18.454 [2024-10-08 10:47:38.891603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:18.454 [2024-10-08 10:47:38.891610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:18.454 [2024-10-08 10:47:38.891621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:18.454 [2024-10-08 10:47:38.891628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:18.454 [2024-10-08 10:47:38.891643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:18.454 [2024-10-08 10:47:38.891651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:18.454 [2024-10-08 10:47:38.891660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.454 [2024-10-08 10:47:38.891668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:18.454 [2024-10-08 10:47:38.891677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:18.454 [2024-10-08 10:47:38.891686] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:18.454 [2024-10-08 10:47:38.891699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:18.454 [2024-10-08 10:47:38.891707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:18.454 [2024-10-08 10:47:38.891717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:18.454 [2024-10-08 10:47:38.891725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:18.454 [2024-10-08 10:47:38.891734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:18.454 [2024-10-08 10:47:38.891742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:18.454 [2024-10-08 10:47:38.891751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:18.454 [2024-10-08 10:47:38.891759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:18.454 [2024-10-08 10:47:38.891770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:18.454 [2024-10-08 10:47:38.891777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:18.454 [2024-10-08 10:47:38.891787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:18.454 [2024-10-08 10:47:38.891998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:18.454 [2024-10-08 10:47:38.892040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:18.454 [2024-10-08 10:47:38.892062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:18.454 [2024-10-08 10:47:38.892084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:18.454 [2024-10-08 10:47:38.892104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:18.455 [2024-10-08 10:47:38.892127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:18.455 [2024-10-08 10:47:38.892147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:18.455 [2024-10-08 10:47:38.892167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:18.455 [2024-10-08 10:47:38.892187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:18.455 [2024-10-08 10:47:38.892207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.455 [2024-10-08 10:47:38.892226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:18.455 [2024-10-08 10:47:38.892246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:18.455 [2024-10-08 10:47:38.892331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.455 [2024-10-08 10:47:38.892357] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:18.455 [2024-10-08 10:47:38.892382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:18.455 [2024-10-08 10:47:38.892404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:18.455 [2024-10-08 10:47:38.892424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.455 [2024-10-08 10:47:38.892447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:18.455 [2024-10-08 10:47:38.892466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:18.455 [2024-10-08 10:47:38.892487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:18.455 [2024-10-08 10:47:38.892505] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:18.455 [2024-10-08 10:47:38.892530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:18.455 [2024-10-08 10:47:38.892549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:18.455 [2024-10-08 10:47:38.892573] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:18.455 [2024-10-08 10:47:38.892655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:18.455 [2024-10-08 10:47:38.892696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:18.455 [2024-10-08 10:47:38.892725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:18.455 [2024-10-08 10:47:38.892756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:18.455 [2024-10-08 10:47:38.892785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:18.455 [2024-10-08 10:47:38.892834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:18.455 [2024-10-08 10:47:38.892863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:18.455 [2024-10-08 10:47:38.892894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:18.455 [2024-10-08 10:47:38.892924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:18.455 [2024-10-08 10:47:38.893007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:18.455 [2024-10-08 10:47:38.893038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:18.455 [2024-10-08 10:47:38.893082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:18.455 [2024-10-08 10:47:38.893111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:18.455 [2024-10-08 10:47:38.893145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:18.455 [2024-10-08 10:47:38.893176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:18.455 [2024-10-08 10:47:38.893207] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:18.455 [2024-10-08 10:47:38.893240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:18.455 [2024-10-08 10:47:38.893328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:16:18.455 [2024-10-08 10:47:38.893359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:18.455 [2024-10-08 10:47:38.893393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:18.455 [2024-10-08 10:47:38.893402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:18.455 [2024-10-08 10:47:38.893414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.455 [2024-10-08 10:47:38.893423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:18.455 [2024-10-08 10:47:38.893434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.972 ms 00:16:18.455 [2024-10-08 10:47:38.893441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.455 [2024-10-08 10:47:38.907546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.455 [2024-10-08 10:47:38.907710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:18.455 [2024-10-08 10:47:38.907780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.006 ms 00:16:18.455 [2024-10-08 10:47:38.907827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.455 [2024-10-08 10:47:38.907980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.455 [2024-10-08 10:47:38.908009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:18.455 [2024-10-08 10:47:38.908034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:16:18.455 [2024-10-08 10:47:38.908058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.455 [2024-10-08 10:47:38.919921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.455 [2024-10-08 10:47:38.920067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:18.455 [2024-10-08 10:47:38.920086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.758 ms 00:16:18.455 [2024-10-08 10:47:38.920094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.455 [2024-10-08 10:47:38.920170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.455 [2024-10-08 10:47:38.920180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:18.455 [2024-10-08 10:47:38.920192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:18.455 [2024-10-08 10:47:38.920200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.455 [2024-10-08 10:47:38.920693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.455 [2024-10-08 10:47:38.920723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:18.455 [2024-10-08 10:47:38.920741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:16:18.455 [2024-10-08 10:47:38.920750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.455 [2024-10-08 10:47:38.920925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.455 [2024-10-08 10:47:38.920945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:18.455 [2024-10-08 10:47:38.920959] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:16:18.455 [2024-10-08 10:47:38.920968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.455 [2024-10-08 10:47:38.939869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.455 [2024-10-08 10:47:38.939941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:18.455 [2024-10-08 10:47:38.939968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.866 ms 00:16:18.455 [2024-10-08 10:47:38.939983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.455 [2024-10-08 10:47:38.944463] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:18.455 [2024-10-08 10:47:38.944532] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:18.455 [2024-10-08 10:47:38.944558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.455 [2024-10-08 10:47:38.944572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:18.455 [2024-10-08 10:47:38.944592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.350 ms 00:16:18.455 [2024-10-08 10:47:38.944604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.455 [2024-10-08 10:47:38.960492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.455 [2024-10-08 10:47:38.960542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:18.455 [2024-10-08 10:47:38.960561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.775 ms 00:16:18.455 [2024-10-08 10:47:38.960569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.455 [2024-10-08 10:47:38.963438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.455 [2024-10-08 10:47:38.963483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:18.455 [2024-10-08 10:47:38.963495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.777 ms 00:16:18.455 [2024-10-08 10:47:38.963503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.455 [2024-10-08 10:47:38.966143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.455 [2024-10-08 10:47:38.966185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:18.455 [2024-10-08 10:47:38.966198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.559 ms 00:16:18.455 [2024-10-08 10:47:38.966205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.455 [2024-10-08 10:47:38.966550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.455 [2024-10-08 10:47:38.966562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:18.455 [2024-10-08 10:47:38.966579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:16:18.455 [2024-10-08 10:47:38.966587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.455 [2024-10-08 10:47:38.986286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.455 [2024-10-08 10:47:38.986329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:18.456 [2024-10-08 10:47:38.986345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.674 ms 
00:16:18.456 [2024-10-08 10:47:38.986354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.456 [2024-10-08 10:47:38.993718] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:18.456 [2024-10-08 10:47:39.007305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.456 [2024-10-08 10:47:39.007341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:18.456 [2024-10-08 10:47:39.007352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.878 ms 00:16:18.456 [2024-10-08 10:47:39.007362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.456 [2024-10-08 10:47:39.007426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.456 [2024-10-08 10:47:39.007438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:18.456 [2024-10-08 10:47:39.007448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:18.456 [2024-10-08 10:47:39.007461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.456 [2024-10-08 10:47:39.007507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.456 [2024-10-08 10:47:39.007524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:18.456 [2024-10-08 10:47:39.007532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:18.456 [2024-10-08 10:47:39.007540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.456 [2024-10-08 10:47:39.007563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.456 [2024-10-08 10:47:39.007577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:18.456 [2024-10-08 10:47:39.007585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:18.456 [2024-10-08 10:47:39.007595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.456 [2024-10-08 10:47:39.007625] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:18.456 [2024-10-08 10:47:39.007636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.456 [2024-10-08 10:47:39.007643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:18.456 [2024-10-08 10:47:39.007652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:18.456 [2024-10-08 10:47:39.007659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.456 [2024-10-08 10:47:39.011655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.456 [2024-10-08 10:47:39.011774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:18.456 [2024-10-08 10:47:39.011807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.972 ms 00:16:18.456 [2024-10-08 10:47:39.011818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.456 [2024-10-08 10:47:39.011906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.456 [2024-10-08 10:47:39.011916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:18.456 [2024-10-08 10:47:39.011930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:18.456 [2024-10-08 10:47:39.011937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.456 [2024-10-08 
10:47:39.012693] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:18.456 [2024-10-08 10:47:39.013695] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 140.854 ms, result 0 00:16:18.456 [2024-10-08 10:47:39.015737] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:18.717 Some configs were skipped because the RPC state that can call them passed over. 00:16:18.717 10:47:39 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:18.717 [2024-10-08 10:47:39.243950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.717 [2024-10-08 10:47:39.244137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:18.717 [2024-10-08 10:47:39.244205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.130 ms 00:16:18.717 [2024-10-08 10:47:39.244235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.717 [2024-10-08 10:47:39.244295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.495 ms, result 0 00:16:18.717 true 00:16:18.717 10:47:39 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:18.978 [2024-10-08 10:47:39.455374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.978 [2024-10-08 10:47:39.455541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:18.978 [2024-10-08 10:47:39.455610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.338 ms 00:16:18.978 [2024-10-08 10:47:39.455636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.978 [2024-10-08 10:47:39.455699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.670 ms, result 0 00:16:18.978 true 00:16:18.978 10:47:39 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 86990 00:16:18.978 10:47:39 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86990 ']' 00:16:18.978 10:47:39 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86990 00:16:18.978 10:47:39 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:18.978 10:47:39 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:18.978 10:47:39 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86990 00:16:18.978 10:47:39 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:18.978 killing process with pid 86990 00:16:18.978 10:47:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:18.978 10:47:39 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86990' 00:16:18.978 10:47:39 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86990 00:16:18.978 10:47:39 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86990 00:16:19.241 [2024-10-08 10:47:39.632242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.241 [2024-10-08 10:47:39.632311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:19.241 [2024-10-08 10:47:39.632325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:19.241 [2024-10-08 
10:47:39.632337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.241 [2024-10-08 10:47:39.632360] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:19.241 [2024-10-08 10:47:39.632997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.241 [2024-10-08 10:47:39.633024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:19.241 [2024-10-08 10:47:39.633042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:16:19.241 [2024-10-08 10:47:39.633065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.241 [2024-10-08 10:47:39.633375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.241 [2024-10-08 10:47:39.633386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:19.241 [2024-10-08 10:47:39.633399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:16:19.241 [2024-10-08 10:47:39.633413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.241 [2024-10-08 10:47:39.637839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.241 [2024-10-08 10:47:39.637879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:19.241 [2024-10-08 10:47:39.637898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.402 ms 00:16:19.241 [2024-10-08 10:47:39.637906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.241 [2024-10-08 10:47:39.644915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.241 [2024-10-08 10:47:39.645075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:19.241 [2024-10-08 10:47:39.645101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.962 ms 00:16:19.241 [2024-10-08 10:47:39.645109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.241 [2024-10-08 10:47:39.647601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.241 [2024-10-08 10:47:39.647644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:19.241 [2024-10-08 10:47:39.647656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.409 ms 00:16:19.241 [2024-10-08 10:47:39.647663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.241 [2024-10-08 10:47:39.652879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.241 [2024-10-08 10:47:39.652920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:19.241 [2024-10-08 10:47:39.652933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.167 ms 00:16:19.241 [2024-10-08 10:47:39.652944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.241 [2024-10-08 10:47:39.653101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.241 [2024-10-08 10:47:39.653112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:19.241 [2024-10-08 10:47:39.653129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:16:19.241 [2024-10-08 10:47:39.653136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.241 [2024-10-08 10:47:39.655427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.241 [2024-10-08 10:47:39.655467] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:19.241 [2024-10-08 10:47:39.655484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.250 ms 00:16:19.241 [2024-10-08 10:47:39.655491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.241 [2024-10-08 10:47:39.657489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.241 [2024-10-08 10:47:39.657532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:19.241 [2024-10-08 10:47:39.657544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.932 ms 00:16:19.241 [2024-10-08 10:47:39.657551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.241 [2024-10-08 10:47:39.659050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.241 [2024-10-08 10:47:39.659192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:19.241 [2024-10-08 10:47:39.659213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.449 ms 00:16:19.241 [2024-10-08 10:47:39.659221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.241 [2024-10-08 10:47:39.660834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.241 [2024-10-08 10:47:39.660871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:19.241 [2024-10-08 10:47:39.660882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:16:19.242 [2024-10-08 10:47:39.660889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.242 [2024-10-08 10:47:39.660933] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:19.242 [2024-10-08 10:47:39.660948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.660963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.660970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.660980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.660988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.660997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661072] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661304] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 
10:47:39.661517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:19.242 [2024-10-08 10:47:39.661632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:16:19.243 [2024-10-08 10:47:39.661736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:19.243 [2024-10-08 10:47:39.661881] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:19.243 [2024-10-08 10:47:39.661892] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e839222-06df-480c-bbd4-867c33d1e348 00:16:19.243 [2024-10-08 10:47:39.661901] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:19.243 [2024-10-08 10:47:39.661913] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:19.243 [2024-10-08 10:47:39.661920] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:19.243 [2024-10-08 10:47:39.661931] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:19.243 [2024-10-08 10:47:39.661938] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:19.243 [2024-10-08 10:47:39.661951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:19.243 [2024-10-08 10:47:39.661960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:19.243 [2024-10-08 10:47:39.661968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:19.243 [2024-10-08 10:47:39.661975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:19.243 [2024-10-08 10:47:39.661984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.243 [2024-10-08 10:47:39.661992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:19.243 [2024-10-08 10:47:39.662006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:16:19.243 [2024-10-08 10:47:39.662013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.664564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.243 [2024-10-08 10:47:39.664600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:19.243 [2024-10-08 10:47:39.664613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.514 ms 00:16:19.243 [2024-10-08 10:47:39.664621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.664749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:19.243 [2024-10-08 10:47:39.664759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:19.243 [2024-10-08 10:47:39.664770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:16:19.243 [2024-10-08 10:47:39.664778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.672704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.243 [2024-10-08 10:47:39.672867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:19.243 [2024-10-08 10:47:39.672930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.243 [2024-10-08 10:47:39.672960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.673087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.243 [2024-10-08 10:47:39.673119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:19.243 [2024-10-08 10:47:39.673146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.243 [2024-10-08 10:47:39.673167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.673237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.243 [2024-10-08 10:47:39.673310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:19.243 [2024-10-08 10:47:39.673334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.243 [2024-10-08 10:47:39.673364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.673401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.243 [2024-10-08 10:47:39.673432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:19.243 [2024-10-08 10:47:39.673462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.243 [2024-10-08 10:47:39.673585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.687423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.243 [2024-10-08 10:47:39.687606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:19.243 [2024-10-08 10:47:39.687672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.243 [2024-10-08 10:47:39.687697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.698679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.243 [2024-10-08 10:47:39.698876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:19.243 [2024-10-08 10:47:39.698942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.243 
[2024-10-08 10:47:39.698969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.699056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.243 [2024-10-08 10:47:39.699087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:19.243 [2024-10-08 10:47:39.699112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.243 [2024-10-08 10:47:39.699132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.699182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.243 [2024-10-08 10:47:39.699205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:19.243 [2024-10-08 10:47:39.699229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.243 [2024-10-08 10:47:39.699301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.699416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.243 [2024-10-08 10:47:39.699443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:19.243 [2024-10-08 10:47:39.699469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.243 [2024-10-08 10:47:39.699489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.699543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.243 [2024-10-08 10:47:39.699567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:19.243 [2024-10-08 10:47:39.699592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.243 [2024-10-08 10:47:39.699615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.699679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.243 [2024-10-08 10:47:39.699764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:19.243 [2024-10-08 10:47:39.699812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.243 [2024-10-08 10:47:39.699835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.699907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:19.243 [2024-10-08 10:47:39.699935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:19.243 [2024-10-08 10:47:39.699960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:19.243 [2024-10-08 10:47:39.699982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:19.243 [2024-10-08 10:47:39.700153] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.874 ms, result 0 00:16:19.504 10:47:39 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:19.504 [2024-10-08 10:47:40.007658] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:16:19.504 [2024-10-08 10:47:40.008007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87029 ] 00:16:19.766 [2024-10-08 10:47:40.138449] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:19.766 [2024-10-08 10:47:40.161326] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.766 [2024-10-08 10:47:40.211231] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.766 [2024-10-08 10:47:40.325189] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:19.766 [2024-10-08 10:47:40.325273] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:20.028 [2024-10-08 10:47:40.486915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.028 [2024-10-08 10:47:40.487166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:20.028 [2024-10-08 10:47:40.487206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:20.028 [2024-10-08 10:47:40.487221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.028 [2024-10-08 10:47:40.489968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.028 [2024-10-08 10:47:40.490018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:20.029 [2024-10-08 10:47:40.490033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:16:20.029 [2024-10-08 10:47:40.490042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.029 [2024-10-08 10:47:40.490161] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:20.029 [2024-10-08 10:47:40.490443] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:20.029 [2024-10-08 10:47:40.490471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.029 [2024-10-08 10:47:40.490481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:20.029 [2024-10-08 10:47:40.490495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:16:20.029 [2024-10-08 10:47:40.490504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.029 [2024-10-08 10:47:40.492319] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:20.029 [2024-10-08 10:47:40.496307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.029 [2024-10-08 10:47:40.496361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:20.029 [2024-10-08 10:47:40.496376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.990 ms 00:16:20.029 [2024-10-08 10:47:40.496385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.029 [2024-10-08 10:47:40.496468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.029 [2024-10-08 10:47:40.496479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:20.029 [2024-10-08 10:47:40.496493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:20.029 [2024-10-08 
10:47:40.496500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.029 [2024-10-08 10:47:40.504972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.029 [2024-10-08 10:47:40.505017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:20.029 [2024-10-08 10:47:40.505029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.428 ms 00:16:20.029 [2024-10-08 10:47:40.505037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.029 [2024-10-08 10:47:40.505178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.029 [2024-10-08 10:47:40.505191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:20.029 [2024-10-08 10:47:40.505201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:16:20.029 [2024-10-08 10:47:40.505210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.029 [2024-10-08 10:47:40.505238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.029 [2024-10-08 10:47:40.505251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:20.029 [2024-10-08 10:47:40.505260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:20.029 [2024-10-08 10:47:40.505268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.029 [2024-10-08 10:47:40.505289] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:20.029 [2024-10-08 10:47:40.507439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.029 [2024-10-08 10:47:40.507476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:20.029 [2024-10-08 10:47:40.507486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.155 ms 00:16:20.029 [2024-10-08 10:47:40.507495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.029 [2024-10-08 10:47:40.507538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.029 [2024-10-08 10:47:40.507555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:20.029 [2024-10-08 10:47:40.507564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:20.029 [2024-10-08 10:47:40.507572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.029 [2024-10-08 10:47:40.507596] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:20.029 [2024-10-08 10:47:40.507616] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:20.029 [2024-10-08 10:47:40.507655] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:20.029 [2024-10-08 10:47:40.507672] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:20.029 [2024-10-08 10:47:40.507784] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:20.029 [2024-10-08 10:47:40.507821] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:20.029 [2024-10-08 10:47:40.507838] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:16:20.029 [2024-10-08 10:47:40.507849] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:20.029 [2024-10-08 10:47:40.507864] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:20.029 [2024-10-08 10:47:40.507873] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:20.029 [2024-10-08 10:47:40.507881] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:20.029 [2024-10-08 10:47:40.507889] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:20.029 [2024-10-08 10:47:40.507897] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:20.029 [2024-10-08 10:47:40.507908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.029 [2024-10-08 10:47:40.507919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:20.029 [2024-10-08 10:47:40.507929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:16:20.029 [2024-10-08 10:47:40.507958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.029 [2024-10-08 10:47:40.508048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.029 [2024-10-08 10:47:40.508063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:20.029 [2024-10-08 10:47:40.508074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:20.029 [2024-10-08 10:47:40.508083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.029 [2024-10-08 10:47:40.508186] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:20.029 [2024-10-08 10:47:40.508200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:20.029 [2024-10-08 10:47:40.508214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:20.029 [2024-10-08 10:47:40.508230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.029 [2024-10-08 10:47:40.508239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:20.029 [2024-10-08 10:47:40.508248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:20.029 [2024-10-08 10:47:40.508266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:20.029 [2024-10-08 10:47:40.508278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:20.029 [2024-10-08 10:47:40.508287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:20.029 [2024-10-08 10:47:40.508296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:20.029 [2024-10-08 10:47:40.508304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:20.029 [2024-10-08 10:47:40.508312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:20.029 [2024-10-08 10:47:40.508322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:20.029 [2024-10-08 10:47:40.508333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:20.029 [2024-10-08 10:47:40.508343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:20.029 [2024-10-08 10:47:40.508351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.029 [2024-10-08 10:47:40.508359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:16:20.029 [2024-10-08 10:47:40.508367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:20.029 [2024-10-08 10:47:40.508375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.029 [2024-10-08 10:47:40.508386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:20.029 [2024-10-08 10:47:40.508394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:20.029 [2024-10-08 10:47:40.508402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.029 [2024-10-08 10:47:40.508410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:20.029 [2024-10-08 10:47:40.508422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:20.029 [2024-10-08 10:47:40.508430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.029 [2024-10-08 10:47:40.508440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:20.029 [2024-10-08 10:47:40.508449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:20.029 [2024-10-08 10:47:40.508457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.029 [2024-10-08 10:47:40.508464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:20.029 [2024-10-08 10:47:40.508472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:20.029 [2024-10-08 10:47:40.508480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:20.029 [2024-10-08 10:47:40.508488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:20.029 [2024-10-08 10:47:40.508497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:20.029 [2024-10-08 10:47:40.508505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:20.029 [2024-10-08 10:47:40.508512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:20.029 [2024-10-08 10:47:40.508519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:20.029 [2024-10-08 10:47:40.508525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:20.029 [2024-10-08 10:47:40.508532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:20.029 [2024-10-08 10:47:40.508540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:20.029 [2024-10-08 10:47:40.508549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.029 [2024-10-08 10:47:40.508557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:20.029 [2024-10-08 10:47:40.508564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:20.029 [2024-10-08 10:47:40.508571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.029 [2024-10-08 10:47:40.508577] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:20.029 [2024-10-08 10:47:40.508586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:20.029 [2024-10-08 10:47:40.508594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:20.029 [2024-10-08 10:47:40.508603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:20.029 [2024-10-08 10:47:40.508611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:20.029 [2024-10-08 10:47:40.508620] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:20.029 [2024-10-08 10:47:40.508627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:20.029 [2024-10-08 10:47:40.508634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:20.030 [2024-10-08 10:47:40.508641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:20.030 [2024-10-08 10:47:40.508647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:20.030 [2024-10-08 10:47:40.508656] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:20.030 [2024-10-08 10:47:40.508665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:20.030 [2024-10-08 10:47:40.508678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:20.030 [2024-10-08 10:47:40.508686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:20.030 [2024-10-08 10:47:40.508693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:20.030 [2024-10-08 10:47:40.508700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:20.030 [2024-10-08 10:47:40.508707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:20.030 [2024-10-08 10:47:40.508714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:20.030 [2024-10-08 10:47:40.508723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:20.030 [2024-10-08 10:47:40.508731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:20.030 [2024-10-08 10:47:40.508738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:20.030 [2024-10-08 10:47:40.508745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:20.030 [2024-10-08 10:47:40.508752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:20.030 [2024-10-08 10:47:40.508760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:20.030 [2024-10-08 10:47:40.508767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:20.030 [2024-10-08 10:47:40.508775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:20.030 [2024-10-08 10:47:40.508783] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:20.030 [2024-10-08 10:47:40.508805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:20.030 [2024-10-08 10:47:40.508816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:20.030 [2024-10-08 10:47:40.508824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:20.030 [2024-10-08 10:47:40.508832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:20.030 [2024-10-08 10:47:40.508839] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:20.030 [2024-10-08 10:47:40.508846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.030 [2024-10-08 10:47:40.508857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:20.030 [2024-10-08 10:47:40.508868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:16:20.030 [2024-10-08 10:47:40.508877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.030 [2024-10-08 10:47:40.531903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.030 [2024-10-08 10:47:40.531968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:20.030 [2024-10-08 10:47:40.531988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.970 ms 00:16:20.030 [2024-10-08 10:47:40.532001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.030 [2024-10-08 10:47:40.532173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.030 [2024-10-08 10:47:40.532186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:20.030 [2024-10-08 10:47:40.532202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:20.030 [2024-10-08 10:47:40.532212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.030 [2024-10-08 10:47:40.544289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.030 [2024-10-08 10:47:40.544336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:20.030 [2024-10-08 10:47:40.544348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.052 ms 00:16:20.030 [2024-10-08 10:47:40.544357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.030 [2024-10-08 10:47:40.544428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.030 [2024-10-08 10:47:40.544439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:20.030 [2024-10-08 10:47:40.544452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:20.030 [2024-10-08 10:47:40.544460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.030 [2024-10-08 10:47:40.544990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.030 [2024-10-08 10:47:40.545023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:20.030 [2024-10-08 10:47:40.545043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:16:20.030 [2024-10-08 10:47:40.545069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.030 [2024-10-08 10:47:40.545225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:20.030 [2024-10-08 10:47:40.545244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:20.030 [2024-10-08 10:47:40.545257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:16:20.030 [2024-10-08 10:47:40.545270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.030 [2024-10-08 10:47:40.553045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.030 [2024-10-08 10:47:40.553103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:20.030 [2024-10-08 10:47:40.553114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.749 ms 00:16:20.030 [2024-10-08 10:47:40.553128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.030 [2024-10-08 10:47:40.557027] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:16:20.030 [2024-10-08 10:47:40.557092] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:20.030 [2024-10-08 10:47:40.557111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.030 [2024-10-08 10:47:40.557120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:20.030 [2024-10-08 10:47:40.557130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.879 ms 00:16:20.030 [2024-10-08 10:47:40.557137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.030 [2024-10-08 10:47:40.572755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.030 [2024-10-08 10:47:40.572823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:20.030 [2024-10-08 10:47:40.572836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.536 ms 00:16:20.030 [2024-10-08 10:47:40.572844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.030 [2024-10-08 10:47:40.575756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.030 [2024-10-08 10:47:40.575814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:20.030 [2024-10-08 10:47:40.575824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.816 ms 00:16:20.030 [2024-10-08 10:47:40.575833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.030 [2024-10-08 10:47:40.578346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.030 [2024-10-08 10:47:40.578535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:20.030 [2024-10-08 10:47:40.578554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.460 ms 00:16:20.030 [2024-10-08 10:47:40.578561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.030 [2024-10-08 10:47:40.579030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.030 [2024-10-08 10:47:40.579056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:20.030 [2024-10-08 10:47:40.579068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:16:20.030 [2024-10-08 10:47:40.579078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.292 [2024-10-08 10:47:40.602652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.292 [2024-10-08 10:47:40.602719] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:20.292 [2024-10-08 10:47:40.602734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.546 ms 00:16:20.292 [2024-10-08 10:47:40.602743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.292 [2024-10-08 10:47:40.611021] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:20.292 [2024-10-08 10:47:40.630319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.292 [2024-10-08 10:47:40.630369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:20.292 [2024-10-08 10:47:40.630383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.451 ms 00:16:20.292 [2024-10-08 10:47:40.630392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.292 [2024-10-08 10:47:40.630488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.292 [2024-10-08 10:47:40.630505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:20.292 [2024-10-08 10:47:40.630519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:20.292 [2024-10-08 10:47:40.630529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.292 [2024-10-08 10:47:40.630589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.292 [2024-10-08 10:47:40.630601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:20.292 [2024-10-08 10:47:40.630610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:20.292 [2024-10-08 10:47:40.630619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.292 [2024-10-08 10:47:40.630645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.292 [2024-10-08 10:47:40.630654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:20.292 [2024-10-08 10:47:40.630663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:20.292 [2024-10-08 10:47:40.630670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.292 [2024-10-08 10:47:40.630706] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:20.292 [2024-10-08 10:47:40.630729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.292 [2024-10-08 10:47:40.630738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:20.292 [2024-10-08 10:47:40.630747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:20.292 [2024-10-08 10:47:40.630755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.292 [2024-10-08 10:47:40.636744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.292 [2024-10-08 10:47:40.636814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:20.292 [2024-10-08 10:47:40.636827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.969 ms 00:16:20.292 [2024-10-08 10:47:40.636835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.292 [2024-10-08 10:47:40.636933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.292 [2024-10-08 10:47:40.636947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:20.293 [2024-10-08 10:47:40.636959] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:20.293 [2024-10-08 10:47:40.636968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.293 [2024-10-08 10:47:40.638041] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:20.293 [2024-10-08 10:47:40.639351] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 150.811 ms, result 0 00:16:20.293 [2024-10-08 10:47:40.640677] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:20.293 [2024-10-08 10:47:40.648034] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:21.239  [2024-10-08T10:47:42.758Z] Copying: 20/256 [MB] (20 MBps) [2024-10-08T10:47:44.146Z] Copying: 38/256 [MB] (18 MBps) [2024-10-08T10:47:44.719Z] Copying: 53/256 [MB] (14 MBps) [2024-10-08T10:47:46.107Z] Copying: 72/256 [MB] (18 MBps) [2024-10-08T10:47:47.050Z] Copying: 84/256 [MB] (12 MBps) [2024-10-08T10:47:47.993Z] Copying: 95/256 [MB] (10 MBps) [2024-10-08T10:47:48.985Z] Copying: 106/256 [MB] (11 MBps) [2024-10-08T10:47:49.929Z] Copying: 123/256 [MB] (16 MBps) [2024-10-08T10:47:50.872Z] Copying: 134/256 [MB] (10 MBps) [2024-10-08T10:47:51.816Z] Copying: 145/256 [MB] (11 MBps) [2024-10-08T10:47:52.761Z] Copying: 160/256 [MB] (14 MBps) [2024-10-08T10:47:54.146Z] Copying: 174/256 [MB] (14 MBps) [2024-10-08T10:47:54.719Z] Copying: 192/256 [MB] (17 MBps) [2024-10-08T10:47:56.106Z] Copying: 212/256 [MB] (19 MBps) [2024-10-08T10:47:57.051Z] Copying: 223/256 [MB] (11 MBps) [2024-10-08T10:47:57.995Z] Copying: 241/256 [MB] (17 MBps) [2024-10-08T10:47:57.995Z] Copying: 252/256 [MB] (11 MBps) [2024-10-08T10:47:58.257Z] Copying: 256/256 [MB] (average 14 MBps)[2024-10-08 10:47:58.230706] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:37.680 [2024-10-08 10:47:58.232673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.680 [2024-10-08 10:47:58.232729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:37.680 [2024-10-08 10:47:58.232750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:37.680 [2024-10-08 10:47:58.232760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.680 [2024-10-08 10:47:58.232786] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:37.680 [2024-10-08 10:47:58.233547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.680 [2024-10-08 10:47:58.233598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:37.680 [2024-10-08 10:47:58.233612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:16:37.680 [2024-10-08 10:47:58.233634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.680 [2024-10-08 10:47:58.233948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.680 [2024-10-08 10:47:58.233972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:37.680 [2024-10-08 10:47:58.233985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:16:37.680 [2024-10-08 10:47:58.233995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.680 [2024-10-08 
10:47:58.237700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.680 [2024-10-08 10:47:58.237727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:37.680 [2024-10-08 10:47:58.237737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.683 ms 00:16:37.680 [2024-10-08 10:47:58.237747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.680 [2024-10-08 10:47:58.244730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.680 [2024-10-08 10:47:58.245035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:37.680 [2024-10-08 10:47:58.245059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.962 ms 00:16:37.680 [2024-10-08 10:47:58.245087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.680 [2024-10-08 10:47:58.248162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.680 [2024-10-08 10:47:58.248208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:37.680 [2024-10-08 10:47:58.248219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.994 ms 00:16:37.680 [2024-10-08 10:47:58.248228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.680 [2024-10-08 10:47:58.254116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.680 [2024-10-08 10:47:58.254181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:37.680 [2024-10-08 10:47:58.254204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.506 ms 00:16:37.680 [2024-10-08 10:47:58.254220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.680 [2024-10-08 10:47:58.254378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.680 [2024-10-08 10:47:58.254392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:37.680 [2024-10-08 10:47:58.254402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:16:37.680 [2024-10-08 10:47:58.254411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.954 [2024-10-08 10:47:58.258110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.954 [2024-10-08 10:47:58.258161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:37.954 [2024-10-08 10:47:58.258172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.675 ms 00:16:37.954 [2024-10-08 10:47:58.258180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.954 [2024-10-08 10:47:58.261463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.954 [2024-10-08 10:47:58.261512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:37.954 [2024-10-08 10:47:58.261523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:16:37.954 [2024-10-08 10:47:58.261532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.954 [2024-10-08 10:47:58.263898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.954 [2024-10-08 10:47:58.263944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:37.954 [2024-10-08 10:47:58.263954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.317 ms 00:16:37.954 [2024-10-08 10:47:58.263963] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:37.954 [2024-10-08 10:47:58.266067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.954 [2024-10-08 10:47:58.266114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:37.954 [2024-10-08 10:47:58.266124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.025 ms 00:16:37.954 [2024-10-08 10:47:58.266132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.954 [2024-10-08 10:47:58.266177] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:37.954 [2024-10-08 10:47:58.266194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:37.954 [2024-10-08 10:47:58.266205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:37.954 [2024-10-08 10:47:58.266215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:37.954 [2024-10-08 10:47:58.266223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:37.954 [2024-10-08 10:47:58.266231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266566] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266769] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:37.955 [2024-10-08 10:47:58.266777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.266996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.267006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.267014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.267023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 
10:47:58.267031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.267039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.267047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.267057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.267065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:37.956 [2024-10-08 10:47:58.267082] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:37.956 [2024-10-08 10:47:58.267091] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e839222-06df-480c-bbd4-867c33d1e348 00:16:37.956 [2024-10-08 10:47:58.267099] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:37.956 [2024-10-08 10:47:58.267108] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:37.956 [2024-10-08 10:47:58.267117] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:37.956 [2024-10-08 10:47:58.267138] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:37.956 [2024-10-08 10:47:58.267156] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:37.956 [2024-10-08 10:47:58.267176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:37.956 [2024-10-08 10:47:58.267184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:37.956 [2024-10-08 10:47:58.267191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:37.956 [2024-10-08 10:47:58.267198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:37.956 [2024-10-08 10:47:58.267208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.956 [2024-10-08 10:47:58.267217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:37.956 [2024-10-08 10:47:58.267231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:16:37.956 [2024-10-08 10:47:58.267243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.956 [2024-10-08 10:47:58.269741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.956 [2024-10-08 10:47:58.269776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:37.956 [2024-10-08 10:47:58.269786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.476 ms 00:16:37.956 [2024-10-08 10:47:58.269816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.956 [2024-10-08 10:47:58.269950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.956 [2024-10-08 10:47:58.269961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:37.956 [2024-10-08 10:47:58.269971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:16:37.956 [2024-10-08 10:47:58.269980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.956 [2024-10-08 10:47:58.277984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.956 [2024-10-08 10:47:58.278171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:37.956 
[2024-10-08 10:47:58.278230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.956 [2024-10-08 10:47:58.278263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.956 [2024-10-08 10:47:58.278374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.956 [2024-10-08 10:47:58.278402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:37.956 [2024-10-08 10:47:58.278424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.956 [2024-10-08 10:47:58.278443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.956 [2024-10-08 10:47:58.278545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.956 [2024-10-08 10:47:58.278576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:37.956 [2024-10-08 10:47:58.278600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.956 [2024-10-08 10:47:58.278622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.956 [2024-10-08 10:47:58.278655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.956 [2024-10-08 10:47:58.278683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:37.956 [2024-10-08 10:47:58.278705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.956 [2024-10-08 10:47:58.278764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.956 [2024-10-08 10:47:58.295375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.956 [2024-10-08 10:47:58.295582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:37.956 [2024-10-08 10:47:58.295653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.956 [2024-10-08 10:47:58.295678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.956 [2024-10-08 10:47:58.307886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.956 [2024-10-08 10:47:58.308085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:37.956 [2024-10-08 10:47:58.308146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.956 [2024-10-08 10:47:58.308170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.956 [2024-10-08 10:47:58.308238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.956 [2024-10-08 10:47:58.308264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:37.956 [2024-10-08 10:47:58.308285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.956 [2024-10-08 10:47:58.308308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.956 [2024-10-08 10:47:58.308350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.956 [2024-10-08 10:47:58.308372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:37.956 [2024-10-08 10:47:58.308394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.956 [2024-10-08 10:47:58.308469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.956 [2024-10-08 10:47:58.308565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.956 [2024-10-08 10:47:58.308585] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:37.956 [2024-10-08 10:47:58.308595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.956 [2024-10-08 10:47:58.308603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.956 [2024-10-08 10:47:58.308639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.957 [2024-10-08 10:47:58.308652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:37.957 [2024-10-08 10:47:58.308662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.957 [2024-10-08 10:47:58.308674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.957 [2024-10-08 10:47:58.308722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.957 [2024-10-08 10:47:58.308733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:37.957 [2024-10-08 10:47:58.308741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.957 [2024-10-08 10:47:58.308755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.957 [2024-10-08 10:47:58.308836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:37.957 [2024-10-08 10:47:58.308853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:37.957 [2024-10-08 10:47:58.308863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:37.957 [2024-10-08 10:47:58.308875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.957 [2024-10-08 10:47:58.309038] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 76.348 ms, result 0 00:16:38.220 00:16:38.220 00:16:38.220 10:47:58 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:16:38.481 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:16:38.481 10:47:59 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:38.481 10:47:59 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:16:38.481 10:47:59 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:16:38.481 10:47:59 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:38.481 10:47:59 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:16:38.481 10:47:59 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:38.743 Process with pid 86990 is not found 00:16:38.743 10:47:59 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 86990 00:16:38.743 10:47:59 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86990 ']' 00:16:38.743 10:47:59 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86990 00:16:38.743 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86990) - No such process 00:16:38.743 10:47:59 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 86990 is not found' 00:16:38.743 ************************************ 00:16:38.743 END TEST ftl_trim 00:16:38.743 ************************************ 00:16:38.743 00:16:38.743 real 1m11.983s 00:16:38.743 user 1m34.319s 00:16:38.743 sys 0m5.619s 00:16:38.743 10:47:59 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.743 10:47:59 ftl.ftl_trim -- common/autotest_common.sh@10 -- 
# set +x 00:16:38.743 10:47:59 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:16:38.743 10:47:59 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:38.743 10:47:59 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.743 10:47:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:38.743 ************************************ 00:16:38.743 START TEST ftl_restore 00:16:38.743 ************************************ 00:16:38.743 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:16:38.743 * Looking for test storage... 00:16:38.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:38.743 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:38.743 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:16:38.743 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:38.743 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.743 10:47:59 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:16:38.744 10:47:59 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.744 10:47:59 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:16:38.744 10:47:59 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:16:38.744 10:47:59 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.744 10:47:59 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:16:38.744 10:47:59 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.744 10:47:59 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.744 10:47:59 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.744 10:47:59 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:16:38.744 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.744 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:38.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.744 --rc genhtml_branch_coverage=1 00:16:38.744 --rc genhtml_function_coverage=1 00:16:38.744 --rc genhtml_legend=1 00:16:38.744 --rc geninfo_all_blocks=1 00:16:38.744 --rc geninfo_unexecuted_blocks=1 00:16:38.744 00:16:38.744 ' 00:16:38.744 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:38.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.744 --rc genhtml_branch_coverage=1 00:16:38.744 --rc genhtml_function_coverage=1 00:16:38.744 --rc genhtml_legend=1 00:16:38.744 --rc geninfo_all_blocks=1 00:16:38.744 --rc geninfo_unexecuted_blocks=1 00:16:38.744 00:16:38.744 ' 00:16:38.744 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:38.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.744 --rc genhtml_branch_coverage=1 00:16:38.744 --rc genhtml_function_coverage=1 00:16:38.744 --rc genhtml_legend=1 00:16:38.744 --rc geninfo_all_blocks=1 00:16:38.744 --rc geninfo_unexecuted_blocks=1 00:16:38.744 00:16:38.744 ' 00:16:38.744 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:38.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.744 --rc genhtml_branch_coverage=1 00:16:38.744 --rc genhtml_function_coverage=1 00:16:38.744 --rc genhtml_legend=1 00:16:38.744 --rc geninfo_all_blocks=1 00:16:38.744 --rc geninfo_unexecuted_blocks=1 00:16:38.744 00:16:38.744 ' 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.PJJekWHcVI 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:16:38.744 
10:47:59 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=87300 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 87300 00:16:38.744 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 87300 ']' 00:16:38.744 10:47:59 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:38.744 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.744 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.744 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.744 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.744 10:47:59 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:16:39.005 [2024-10-08 10:47:59.387968] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:16:39.005 [2024-10-08 10:47:59.388294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87300 ] 00:16:39.005 [2024-10-08 10:47:59.519968] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:39.005 [2024-10-08 10:47:59.533062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.266 [2024-10-08 10:47:59.587076] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.838 10:48:00 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.838 10:48:00 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:16:39.838 10:48:00 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:39.838 10:48:00 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:16:39.838 10:48:00 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:39.838 10:48:00 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:16:39.838 10:48:00 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:16:39.838 10:48:00 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:40.100 10:48:00 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:40.100 10:48:00 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:16:40.100 10:48:00 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:40.100 10:48:00 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:40.100 10:48:00 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:40.100 10:48:00 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:16:40.100 10:48:00 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:16:40.100 10:48:00 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:40.361 10:48:00 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:40.361 { 00:16:40.361 "name": "nvme0n1", 00:16:40.361 "aliases": [ 00:16:40.361 
"d417c5b4-dbd6-4b07-b1be-701bbe609374" 00:16:40.361 ], 00:16:40.361 "product_name": "NVMe disk", 00:16:40.361 "block_size": 4096, 00:16:40.361 "num_blocks": 1310720, 00:16:40.361 "uuid": "d417c5b4-dbd6-4b07-b1be-701bbe609374", 00:16:40.361 "numa_id": -1, 00:16:40.361 "assigned_rate_limits": { 00:16:40.361 "rw_ios_per_sec": 0, 00:16:40.361 "rw_mbytes_per_sec": 0, 00:16:40.361 "r_mbytes_per_sec": 0, 00:16:40.361 "w_mbytes_per_sec": 0 00:16:40.361 }, 00:16:40.361 "claimed": true, 00:16:40.361 "claim_type": "read_many_write_one", 00:16:40.361 "zoned": false, 00:16:40.361 "supported_io_types": { 00:16:40.361 "read": true, 00:16:40.361 "write": true, 00:16:40.361 "unmap": true, 00:16:40.361 "flush": true, 00:16:40.361 "reset": true, 00:16:40.361 "nvme_admin": true, 00:16:40.361 "nvme_io": true, 00:16:40.361 "nvme_io_md": false, 00:16:40.361 "write_zeroes": true, 00:16:40.361 "zcopy": false, 00:16:40.361 "get_zone_info": false, 00:16:40.361 "zone_management": false, 00:16:40.361 "zone_append": false, 00:16:40.361 "compare": true, 00:16:40.361 "compare_and_write": false, 00:16:40.361 "abort": true, 00:16:40.361 "seek_hole": false, 00:16:40.361 "seek_data": false, 00:16:40.361 "copy": true, 00:16:40.361 "nvme_iov_md": false 00:16:40.361 }, 00:16:40.361 "driver_specific": { 00:16:40.361 "nvme": [ 00:16:40.361 { 00:16:40.361 "pci_address": "0000:00:11.0", 00:16:40.361 "trid": { 00:16:40.361 "trtype": "PCIe", 00:16:40.361 "traddr": "0000:00:11.0" 00:16:40.361 }, 00:16:40.361 "ctrlr_data": { 00:16:40.361 "cntlid": 0, 00:16:40.361 "vendor_id": "0x1b36", 00:16:40.361 "model_number": "QEMU NVMe Ctrl", 00:16:40.361 "serial_number": "12341", 00:16:40.361 "firmware_revision": "8.0.0", 00:16:40.361 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:40.361 "oacs": { 00:16:40.361 "security": 0, 00:16:40.361 "format": 1, 00:16:40.361 "firmware": 0, 00:16:40.361 "ns_manage": 1 00:16:40.361 }, 00:16:40.361 "multi_ctrlr": false, 00:16:40.361 "ana_reporting": false 00:16:40.361 }, 00:16:40.361 "vs": { 00:16:40.361 "nvme_version": "1.4" 00:16:40.361 }, 00:16:40.361 "ns_data": { 00:16:40.361 "id": 1, 00:16:40.361 "can_share": false 00:16:40.361 } 00:16:40.361 } 00:16:40.361 ], 00:16:40.361 "mp_policy": "active_passive" 00:16:40.361 } 00:16:40.361 } 00:16:40.361 ]' 00:16:40.361 10:48:00 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:40.361 10:48:00 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:16:40.361 10:48:00 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:40.361 10:48:00 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:40.361 10:48:00 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:40.361 10:48:00 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:16:40.361 10:48:00 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:16:40.361 10:48:00 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:40.361 10:48:00 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:16:40.361 10:48:00 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:40.361 10:48:00 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:40.623 10:48:01 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=6c6dbcfc-4039-4795-a142-8a99520ed99c 00:16:40.623 10:48:01 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:16:40.623 10:48:01 ftl.ftl_restore -- ftl/common.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c6dbcfc-4039-4795-a142-8a99520ed99c 00:16:40.884 10:48:01 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:41.146 10:48:01 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=97825d88-c263-4c91-9609-5952f05ac1a2 00:16:41.146 10:48:01 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 97825d88-c263-4c91-9609-5952f05ac1a2 00:16:41.146 10:48:01 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=bcd95d8a-30a8-4b43-895a-32b55ea16a68 00:16:41.146 10:48:01 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:16:41.146 10:48:01 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 bcd95d8a-30a8-4b43-895a-32b55ea16a68 00:16:41.146 10:48:01 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:16:41.146 10:48:01 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:41.146 10:48:01 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=bcd95d8a-30a8-4b43-895a-32b55ea16a68 00:16:41.146 10:48:01 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:16:41.146 10:48:01 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size bcd95d8a-30a8-4b43-895a-32b55ea16a68 00:16:41.146 10:48:01 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=bcd95d8a-30a8-4b43-895a-32b55ea16a68 00:16:41.146 10:48:01 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:41.146 10:48:01 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:16:41.146 10:48:01 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:16:41.146 10:48:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bcd95d8a-30a8-4b43-895a-32b55ea16a68 00:16:41.406 10:48:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:41.406 { 00:16:41.406 "name": "bcd95d8a-30a8-4b43-895a-32b55ea16a68", 00:16:41.406 "aliases": [ 00:16:41.406 "lvs/nvme0n1p0" 00:16:41.406 ], 00:16:41.406 "product_name": "Logical Volume", 00:16:41.406 "block_size": 4096, 00:16:41.406 "num_blocks": 26476544, 00:16:41.406 "uuid": "bcd95d8a-30a8-4b43-895a-32b55ea16a68", 00:16:41.406 "assigned_rate_limits": { 00:16:41.406 "rw_ios_per_sec": 0, 00:16:41.406 "rw_mbytes_per_sec": 0, 00:16:41.406 "r_mbytes_per_sec": 0, 00:16:41.406 "w_mbytes_per_sec": 0 00:16:41.406 }, 00:16:41.406 "claimed": false, 00:16:41.406 "zoned": false, 00:16:41.406 "supported_io_types": { 00:16:41.406 "read": true, 00:16:41.406 "write": true, 00:16:41.406 "unmap": true, 00:16:41.406 "flush": false, 00:16:41.406 "reset": true, 00:16:41.406 "nvme_admin": false, 00:16:41.406 "nvme_io": false, 00:16:41.406 "nvme_io_md": false, 00:16:41.406 "write_zeroes": true, 00:16:41.406 "zcopy": false, 00:16:41.406 "get_zone_info": false, 00:16:41.406 "zone_management": false, 00:16:41.406 "zone_append": false, 00:16:41.406 "compare": false, 00:16:41.406 "compare_and_write": false, 00:16:41.406 "abort": false, 00:16:41.406 "seek_hole": true, 00:16:41.406 "seek_data": true, 00:16:41.406 "copy": false, 00:16:41.406 "nvme_iov_md": false 00:16:41.406 }, 00:16:41.406 "driver_specific": { 00:16:41.406 "lvol": { 00:16:41.406 "lvol_store_uuid": "97825d88-c263-4c91-9609-5952f05ac1a2", 00:16:41.406 "base_bdev": "nvme0n1", 00:16:41.406 "thin_provision": true, 00:16:41.406 "num_allocated_clusters": 0, 
00:16:41.406 "snapshot": false, 00:16:41.406 "clone": false, 00:16:41.406 "esnap_clone": false 00:16:41.406 } 00:16:41.406 } 00:16:41.406 } 00:16:41.406 ]' 00:16:41.406 10:48:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:41.406 10:48:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:16:41.406 10:48:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:41.406 10:48:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:41.406 10:48:01 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:41.406 10:48:01 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:16:41.406 10:48:01 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:16:41.406 10:48:01 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:16:41.406 10:48:01 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:41.667 10:48:02 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:41.667 10:48:02 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:41.667 10:48:02 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size bcd95d8a-30a8-4b43-895a-32b55ea16a68 00:16:41.667 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=bcd95d8a-30a8-4b43-895a-32b55ea16a68 00:16:41.667 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:41.667 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:16:41.667 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:16:41.667 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bcd95d8a-30a8-4b43-895a-32b55ea16a68 00:16:41.929 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:41.929 { 00:16:41.929 "name": "bcd95d8a-30a8-4b43-895a-32b55ea16a68", 00:16:41.929 "aliases": [ 00:16:41.929 "lvs/nvme0n1p0" 00:16:41.929 ], 00:16:41.929 "product_name": "Logical Volume", 00:16:41.929 "block_size": 4096, 00:16:41.929 "num_blocks": 26476544, 00:16:41.929 "uuid": "bcd95d8a-30a8-4b43-895a-32b55ea16a68", 00:16:41.929 "assigned_rate_limits": { 00:16:41.929 "rw_ios_per_sec": 0, 00:16:41.929 "rw_mbytes_per_sec": 0, 00:16:41.929 "r_mbytes_per_sec": 0, 00:16:41.929 "w_mbytes_per_sec": 0 00:16:41.929 }, 00:16:41.929 "claimed": false, 00:16:41.929 "zoned": false, 00:16:41.929 "supported_io_types": { 00:16:41.929 "read": true, 00:16:41.929 "write": true, 00:16:41.929 "unmap": true, 00:16:41.929 "flush": false, 00:16:41.929 "reset": true, 00:16:41.929 "nvme_admin": false, 00:16:41.929 "nvme_io": false, 00:16:41.929 "nvme_io_md": false, 00:16:41.929 "write_zeroes": true, 00:16:41.929 "zcopy": false, 00:16:41.929 "get_zone_info": false, 00:16:41.929 "zone_management": false, 00:16:41.929 "zone_append": false, 00:16:41.929 "compare": false, 00:16:41.929 "compare_and_write": false, 00:16:41.929 "abort": false, 00:16:41.929 "seek_hole": true, 00:16:41.929 "seek_data": true, 00:16:41.929 "copy": false, 00:16:41.929 "nvme_iov_md": false 00:16:41.929 }, 00:16:41.929 "driver_specific": { 00:16:41.929 "lvol": { 00:16:41.929 "lvol_store_uuid": "97825d88-c263-4c91-9609-5952f05ac1a2", 00:16:41.929 "base_bdev": "nvme0n1", 00:16:41.929 "thin_provision": true, 00:16:41.929 "num_allocated_clusters": 0, 00:16:41.929 "snapshot": false, 00:16:41.929 "clone": false, 
00:16:41.929 "esnap_clone": false 00:16:41.929 } 00:16:41.929 } 00:16:41.929 } 00:16:41.929 ]' 00:16:41.929 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:41.929 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:16:41.929 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:41.929 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:41.929 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:41.929 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:16:42.188 10:48:02 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:16:42.188 10:48:02 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:42.188 10:48:02 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:16:42.188 10:48:02 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size bcd95d8a-30a8-4b43-895a-32b55ea16a68 00:16:42.188 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=bcd95d8a-30a8-4b43-895a-32b55ea16a68 00:16:42.188 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:42.188 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:16:42.188 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:16:42.188 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bcd95d8a-30a8-4b43-895a-32b55ea16a68 00:16:42.447 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:42.447 { 00:16:42.447 "name": "bcd95d8a-30a8-4b43-895a-32b55ea16a68", 00:16:42.447 "aliases": [ 00:16:42.447 "lvs/nvme0n1p0" 00:16:42.447 ], 00:16:42.447 "product_name": "Logical Volume", 00:16:42.447 "block_size": 4096, 00:16:42.447 "num_blocks": 26476544, 00:16:42.447 "uuid": "bcd95d8a-30a8-4b43-895a-32b55ea16a68", 00:16:42.447 "assigned_rate_limits": { 00:16:42.447 "rw_ios_per_sec": 0, 00:16:42.447 "rw_mbytes_per_sec": 0, 00:16:42.447 "r_mbytes_per_sec": 0, 00:16:42.447 "w_mbytes_per_sec": 0 00:16:42.447 }, 00:16:42.447 "claimed": false, 00:16:42.447 "zoned": false, 00:16:42.447 "supported_io_types": { 00:16:42.447 "read": true, 00:16:42.447 "write": true, 00:16:42.447 "unmap": true, 00:16:42.447 "flush": false, 00:16:42.447 "reset": true, 00:16:42.447 "nvme_admin": false, 00:16:42.447 "nvme_io": false, 00:16:42.447 "nvme_io_md": false, 00:16:42.447 "write_zeroes": true, 00:16:42.447 "zcopy": false, 00:16:42.447 "get_zone_info": false, 00:16:42.447 "zone_management": false, 00:16:42.447 "zone_append": false, 00:16:42.447 "compare": false, 00:16:42.447 "compare_and_write": false, 00:16:42.447 "abort": false, 00:16:42.447 "seek_hole": true, 00:16:42.447 "seek_data": true, 00:16:42.447 "copy": false, 00:16:42.447 "nvme_iov_md": false 00:16:42.447 }, 00:16:42.447 "driver_specific": { 00:16:42.447 "lvol": { 00:16:42.447 "lvol_store_uuid": "97825d88-c263-4c91-9609-5952f05ac1a2", 00:16:42.447 "base_bdev": "nvme0n1", 00:16:42.447 "thin_provision": true, 00:16:42.447 "num_allocated_clusters": 0, 00:16:42.447 "snapshot": false, 00:16:42.447 "clone": false, 00:16:42.447 "esnap_clone": false 00:16:42.447 } 00:16:42.447 } 00:16:42.447 } 00:16:42.447 ]' 00:16:42.447 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:42.447 10:48:02 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:42.447 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:42.447 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:42.447 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:42.447 10:48:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:16:42.447 10:48:02 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:16:42.447 10:48:02 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d bcd95d8a-30a8-4b43-895a-32b55ea16a68 --l2p_dram_limit 10' 00:16:42.447 10:48:02 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:16:42.447 10:48:02 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:42.447 10:48:02 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:16:42.447 10:48:02 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:16:42.447 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:16:42.447 10:48:02 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bcd95d8a-30a8-4b43-895a-32b55ea16a68 --l2p_dram_limit 10 -c nvc0n1p0 00:16:42.706 [2024-10-08 10:48:03.160347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.706 [2024-10-08 10:48:03.160386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:42.706 [2024-10-08 10:48:03.160401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:42.706 [2024-10-08 10:48:03.160407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.706 [2024-10-08 10:48:03.160448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.706 [2024-10-08 10:48:03.160455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:42.706 [2024-10-08 10:48:03.160466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:16:42.706 [2024-10-08 10:48:03.160475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.706 [2024-10-08 10:48:03.160495] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:42.706 [2024-10-08 10:48:03.161012] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:42.706 [2024-10-08 10:48:03.161038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.706 [2024-10-08 10:48:03.161047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:42.706 [2024-10-08 10:48:03.161056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:16:42.706 [2024-10-08 10:48:03.161062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.706 [2024-10-08 10:48:03.161137] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 50a8d519-5012-4166-b574-4fc22bad979e 00:16:42.706 [2024-10-08 10:48:03.162068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.706 [2024-10-08 10:48:03.162093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:42.706 [2024-10-08 10:48:03.162101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:42.706 [2024-10-08 10:48:03.162112] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.706 [2024-10-08 10:48:03.166819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.706 [2024-10-08 10:48:03.166851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:42.706 [2024-10-08 10:48:03.166858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.674 ms 00:16:42.706 [2024-10-08 10:48:03.166867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.706 [2024-10-08 10:48:03.166933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.706 [2024-10-08 10:48:03.166942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:42.706 [2024-10-08 10:48:03.166954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:42.706 [2024-10-08 10:48:03.166961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.706 [2024-10-08 10:48:03.167001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.706 [2024-10-08 10:48:03.167011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:42.706 [2024-10-08 10:48:03.167018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:42.706 [2024-10-08 10:48:03.167025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.706 [2024-10-08 10:48:03.167042] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:42.706 [2024-10-08 10:48:03.168288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.706 [2024-10-08 10:48:03.168313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:42.707 [2024-10-08 10:48:03.168323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.248 ms 00:16:42.707 [2024-10-08 10:48:03.168332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.707 [2024-10-08 10:48:03.168358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.707 [2024-10-08 10:48:03.168365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:42.707 [2024-10-08 10:48:03.168374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:42.707 [2024-10-08 10:48:03.168380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.707 [2024-10-08 10:48:03.168401] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:42.707 [2024-10-08 10:48:03.168505] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:42.707 [2024-10-08 10:48:03.168516] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:42.707 [2024-10-08 10:48:03.168524] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:42.707 [2024-10-08 10:48:03.168532] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:42.707 [2024-10-08 10:48:03.168539] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:42.707 [2024-10-08 10:48:03.168553] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:42.707 [2024-10-08 10:48:03.168560] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 
4 00:16:42.707 [2024-10-08 10:48:03.168567] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:42.707 [2024-10-08 10:48:03.168574] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:42.707 [2024-10-08 10:48:03.168581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.707 [2024-10-08 10:48:03.168588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:42.707 [2024-10-08 10:48:03.168595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:16:42.707 [2024-10-08 10:48:03.168601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.707 [2024-10-08 10:48:03.168667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.707 [2024-10-08 10:48:03.168673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:42.707 [2024-10-08 10:48:03.168680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:42.707 [2024-10-08 10:48:03.168686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.707 [2024-10-08 10:48:03.168759] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:42.707 [2024-10-08 10:48:03.168767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:42.707 [2024-10-08 10:48:03.168774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.707 [2024-10-08 10:48:03.168780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.707 [2024-10-08 10:48:03.168788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:42.707 [2024-10-08 10:48:03.168803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:42.707 [2024-10-08 10:48:03.168810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:42.707 [2024-10-08 10:48:03.168815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:42.707 [2024-10-08 10:48:03.168822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:42.707 [2024-10-08 10:48:03.168827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.707 [2024-10-08 10:48:03.168835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:42.707 [2024-10-08 10:48:03.168840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:42.707 [2024-10-08 10:48:03.168849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.707 [2024-10-08 10:48:03.168855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:42.707 [2024-10-08 10:48:03.168861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:42.707 [2024-10-08 10:48:03.168866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.707 [2024-10-08 10:48:03.168873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:42.707 [2024-10-08 10:48:03.168878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:42.707 [2024-10-08 10:48:03.168885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.707 [2024-10-08 10:48:03.168890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:42.707 [2024-10-08 10:48:03.168896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:42.707 [2024-10-08 10:48:03.168901] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.707 [2024-10-08 10:48:03.168908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:42.707 [2024-10-08 10:48:03.168913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:42.707 [2024-10-08 10:48:03.168919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.707 [2024-10-08 10:48:03.168924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:42.707 [2024-10-08 10:48:03.168931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:42.707 [2024-10-08 10:48:03.168936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.707 [2024-10-08 10:48:03.168945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:42.707 [2024-10-08 10:48:03.168950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:42.707 [2024-10-08 10:48:03.168957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.707 [2024-10-08 10:48:03.168963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:42.707 [2024-10-08 10:48:03.168970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:42.707 [2024-10-08 10:48:03.168976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.707 [2024-10-08 10:48:03.168982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:42.707 [2024-10-08 10:48:03.168988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:42.707 [2024-10-08 10:48:03.168996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.707 [2024-10-08 10:48:03.169002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:42.707 [2024-10-08 10:48:03.169009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:42.707 [2024-10-08 10:48:03.169015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.707 [2024-10-08 10:48:03.169022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:42.707 [2024-10-08 10:48:03.169027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:42.707 [2024-10-08 10:48:03.169034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.707 [2024-10-08 10:48:03.169040] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:42.707 [2024-10-08 10:48:03.169051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:42.707 [2024-10-08 10:48:03.169057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.707 [2024-10-08 10:48:03.169069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.707 [2024-10-08 10:48:03.169076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:42.707 [2024-10-08 10:48:03.169083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:42.707 [2024-10-08 10:48:03.169106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:42.707 [2024-10-08 10:48:03.169114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:42.707 [2024-10-08 10:48:03.169120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:42.707 [2024-10-08 10:48:03.169128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 
00:16:42.707 [2024-10-08 10:48:03.169136] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:42.707 [2024-10-08 10:48:03.169145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.707 [2024-10-08 10:48:03.169153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:42.707 [2024-10-08 10:48:03.169160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:42.707 [2024-10-08 10:48:03.169167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:42.707 [2024-10-08 10:48:03.169174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:42.707 [2024-10-08 10:48:03.169180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:42.707 [2024-10-08 10:48:03.169191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:42.707 [2024-10-08 10:48:03.169197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:42.707 [2024-10-08 10:48:03.169205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:42.707 [2024-10-08 10:48:03.169211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:42.707 [2024-10-08 10:48:03.169218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:42.707 [2024-10-08 10:48:03.169224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:42.707 [2024-10-08 10:48:03.169231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:42.707 [2024-10-08 10:48:03.169237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:42.707 [2024-10-08 10:48:03.169245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:42.707 [2024-10-08 10:48:03.169251] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:42.707 [2024-10-08 10:48:03.169261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.707 [2024-10-08 10:48:03.169268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:42.707 [2024-10-08 10:48:03.169276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:42.707 [2024-10-08 10:48:03.169282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:42.707 [2024-10-08 10:48:03.169290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:42.707 [2024-10-08 10:48:03.169296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.707 [2024-10-08 10:48:03.169308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:42.707 [2024-10-08 10:48:03.169314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:16:42.707 [2024-10-08 10:48:03.169324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.708 [2024-10-08 10:48:03.169352] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:16:42.708 [2024-10-08 10:48:03.169361] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:46.947 [2024-10-08 10:48:06.923169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:06.923235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:46.948 [2024-10-08 10:48:06.923253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3753.799 ms 00:16:46.948 [2024-10-08 10:48:06.923264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:06.932334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:06.932381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:46.948 [2024-10-08 10:48:06.932392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.990 ms 00:16:46.948 [2024-10-08 10:48:06.932405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:06.932490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:06.932500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:46.948 [2024-10-08 10:48:06.932511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:46.948 [2024-10-08 10:48:06.932521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:06.941023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:06.941066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:46.948 [2024-10-08 10:48:06.941080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.459 ms 00:16:46.948 [2024-10-08 10:48:06.941102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:06.941128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:06.941141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:46.948 [2024-10-08 10:48:06.941149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:46.948 [2024-10-08 10:48:06.941158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:06.941537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:06.941557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:46.948 [2024-10-08 10:48:06.941567] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:16:46.948 [2024-10-08 10:48:06.941579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:06.941683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:06.941695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:46.948 [2024-10-08 10:48:06.941707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:16:46.948 [2024-10-08 10:48:06.941717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:06.957621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:06.957684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:46.948 [2024-10-08 10:48:06.957699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.881 ms 00:16:46.948 [2024-10-08 10:48:06.957712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:06.968377] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:16:46.948 [2024-10-08 10:48:06.971676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:06.971850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:46.948 [2024-10-08 10:48:06.971873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.825 ms 00:16:46.948 [2024-10-08 10:48:06.971881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:07.032865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:07.032916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:46.948 [2024-10-08 10:48:07.032933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.950 ms 00:16:46.948 [2024-10-08 10:48:07.032945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:07.033142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:07.033159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:46.948 [2024-10-08 10:48:07.033171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:16:46.948 [2024-10-08 10:48:07.033179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:07.038163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:07.038204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:46.948 [2024-10-08 10:48:07.038217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.933 ms 00:16:46.948 [2024-10-08 10:48:07.038225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:07.042396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:07.042436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:46.948 [2024-10-08 10:48:07.042449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.117 ms 00:16:46.948 [2024-10-08 10:48:07.042456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:07.042778] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:07.042788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:46.948 [2024-10-08 10:48:07.042828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:16:46.948 [2024-10-08 10:48:07.042840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:07.076166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:07.076208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:46.948 [2024-10-08 10:48:07.076221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.299 ms 00:16:46.948 [2024-10-08 10:48:07.076229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:07.082095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:07.082253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:46.948 [2024-10-08 10:48:07.082275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.805 ms 00:16:46.948 [2024-10-08 10:48:07.082284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:07.087037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:07.087076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:46.948 [2024-10-08 10:48:07.087088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.680 ms 00:16:46.948 [2024-10-08 10:48:07.087095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:07.092213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:07.092255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:46.948 [2024-10-08 10:48:07.092270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.073 ms 00:16:46.948 [2024-10-08 10:48:07.092278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:07.092323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:07.092332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:46.948 [2024-10-08 10:48:07.092344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:46.948 [2024-10-08 10:48:07.092356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:07.092440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:46.948 [2024-10-08 10:48:07.092450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:46.948 [2024-10-08 10:48:07.092461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:46.948 [2024-10-08 10:48:07.092470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:46.948 [2024-10-08 10:48:07.093512] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3932.713 ms, result 0 00:16:46.948 { 00:16:46.948 "name": "ftl0", 00:16:46.948 "uuid": "50a8d519-5012-4166-b574-4fc22bad979e" 00:16:46.948 } 00:16:46.948 10:48:07 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:16:46.948 10:48:07 ftl.ftl_restore -- ftl/restore.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:46.948 10:48:07 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:16:46.948 10:48:07 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:47.211 [2024-10-08 10:48:07.525060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.211 [2024-10-08 10:48:07.525145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:47.211 [2024-10-08 10:48:07.525167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:47.211 [2024-10-08 10:48:07.525179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.211 [2024-10-08 10:48:07.525207] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:47.211 [2024-10-08 10:48:07.526014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.211 [2024-10-08 10:48:07.526071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:47.211 [2024-10-08 10:48:07.526090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:16:47.211 [2024-10-08 10:48:07.526100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.211 [2024-10-08 10:48:07.526374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.211 [2024-10-08 10:48:07.526386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:47.211 [2024-10-08 10:48:07.526403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:16:47.211 [2024-10-08 10:48:07.526412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.211 [2024-10-08 10:48:07.529680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.211 [2024-10-08 10:48:07.529706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:47.211 [2024-10-08 10:48:07.529720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.248 ms 00:16:47.211 [2024-10-08 10:48:07.529728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.211 [2024-10-08 10:48:07.535974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.211 [2024-10-08 10:48:07.536173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:47.211 [2024-10-08 10:48:07.536201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.222 ms 00:16:47.211 [2024-10-08 10:48:07.536209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.211 [2024-10-08 10:48:07.538306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.211 [2024-10-08 10:48:07.538358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:47.211 [2024-10-08 10:48:07.538371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.995 ms 00:16:47.211 [2024-10-08 10:48:07.538379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.211 [2024-10-08 10:48:07.543822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.211 [2024-10-08 10:48:07.543871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:47.211 [2024-10-08 10:48:07.543885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.371 ms 00:16:47.211 [2024-10-08 10:48:07.543893] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:47.211 [2024-10-08 10:48:07.544033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.211 [2024-10-08 10:48:07.544044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:47.212 [2024-10-08 10:48:07.544056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:16:47.212 [2024-10-08 10:48:07.544064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.212 [2024-10-08 10:48:07.547022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.212 [2024-10-08 10:48:07.547205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:47.212 [2024-10-08 10:48:07.547232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.927 ms 00:16:47.212 [2024-10-08 10:48:07.547240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.212 [2024-10-08 10:48:07.550191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.212 [2024-10-08 10:48:07.550239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:47.212 [2024-10-08 10:48:07.550251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.858 ms 00:16:47.212 [2024-10-08 10:48:07.550259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.212 [2024-10-08 10:48:07.552503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.212 [2024-10-08 10:48:07.552550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:47.212 [2024-10-08 10:48:07.552564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.190 ms 00:16:47.212 [2024-10-08 10:48:07.552571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.212 [2024-10-08 10:48:07.554663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.212 [2024-10-08 10:48:07.554711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:47.212 [2024-10-08 10:48:07.554723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.994 ms 00:16:47.212 [2024-10-08 10:48:07.554730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.212 [2024-10-08 10:48:07.554775] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:47.212 [2024-10-08 10:48:07.554792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 
00:16:47.212 [2024-10-08 10:48:07.554885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.554995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 
wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:47.212 [2024-10-08 10:48:07.555531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555556] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:47.213 [2024-10-08 10:48:07.555728] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:47.213 [2024-10-08 10:48:07.555738] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 50a8d519-5012-4166-b574-4fc22bad979e 00:16:47.213 [2024-10-08 10:48:07.555746] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:47.213 [2024-10-08 10:48:07.555757] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:47.213 [2024-10-08 10:48:07.555764] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:47.213 [2024-10-08 10:48:07.555775] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:47.213 [2024-10-08 10:48:07.555783] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:47.213 [2024-10-08 10:48:07.555807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:47.213 [2024-10-08 10:48:07.555815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:47.213 
[2024-10-08 10:48:07.555823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:47.213 [2024-10-08 10:48:07.555837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:47.213 [2024-10-08 10:48:07.555847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.213 [2024-10-08 10:48:07.555860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:47.213 [2024-10-08 10:48:07.555872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.073 ms 00:16:47.213 [2024-10-08 10:48:07.555879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.558306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.213 [2024-10-08 10:48:07.558341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:47.213 [2024-10-08 10:48:07.558353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.398 ms 00:16:47.213 [2024-10-08 10:48:07.558362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.558486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:47.213 [2024-10-08 10:48:07.558495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:47.213 [2024-10-08 10:48:07.558506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:16:47.213 [2024-10-08 10:48:07.558514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.566909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.213 [2024-10-08 10:48:07.566956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:47.213 [2024-10-08 10:48:07.566970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.213 [2024-10-08 10:48:07.566978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.567049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.213 [2024-10-08 10:48:07.567059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:47.213 [2024-10-08 10:48:07.567070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.213 [2024-10-08 10:48:07.567083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.567151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.213 [2024-10-08 10:48:07.567161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:47.213 [2024-10-08 10:48:07.567171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.213 [2024-10-08 10:48:07.567178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.567198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.213 [2024-10-08 10:48:07.567210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:47.213 [2024-10-08 10:48:07.567224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.213 [2024-10-08 10:48:07.567233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.580560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.213 [2024-10-08 10:48:07.580610] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:47.213 [2024-10-08 10:48:07.580624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.213 [2024-10-08 10:48:07.580633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.591333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.213 [2024-10-08 10:48:07.591380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:47.213 [2024-10-08 10:48:07.591394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.213 [2024-10-08 10:48:07.591406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.591484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.213 [2024-10-08 10:48:07.591494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:47.213 [2024-10-08 10:48:07.591505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.213 [2024-10-08 10:48:07.591512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.591560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.213 [2024-10-08 10:48:07.591570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:47.213 [2024-10-08 10:48:07.591583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.213 [2024-10-08 10:48:07.591590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.591661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.213 [2024-10-08 10:48:07.591670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:47.213 [2024-10-08 10:48:07.591680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.213 [2024-10-08 10:48:07.591688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.591727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.213 [2024-10-08 10:48:07.591737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:47.213 [2024-10-08 10:48:07.591747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.213 [2024-10-08 10:48:07.591758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.591827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.213 [2024-10-08 10:48:07.591837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:47.213 [2024-10-08 10:48:07.591847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.213 [2024-10-08 10:48:07.591855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.591903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:47.213 [2024-10-08 10:48:07.591913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:47.213 [2024-10-08 10:48:07.591925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:47.213 [2024-10-08 10:48:07.591934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:47.213 [2024-10-08 10:48:07.592071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 66.979 ms, result 0 00:16:47.213 true 00:16:47.213 10:48:07 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 87300 00:16:47.213 10:48:07 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 87300 ']' 00:16:47.213 10:48:07 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 87300 00:16:47.213 10:48:07 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:16:47.213 10:48:07 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:47.213 10:48:07 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87300 00:16:47.213 killing process with pid 87300 00:16:47.213 10:48:07 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:47.213 10:48:07 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:47.213 10:48:07 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87300' 00:16:47.213 10:48:07 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 87300 00:16:47.213 10:48:07 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 87300 00:16:52.507 10:48:12 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:16:55.809 262144+0 records in 00:16:55.809 262144+0 records out 00:16:55.809 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.74647 s, 287 MB/s 00:16:55.809 10:48:15 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:16:57.723 10:48:18 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:57.723 [2024-10-08 10:48:18.242023] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:16:57.723 [2024-10-08 10:48:18.242152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87519 ] 00:16:57.982 [2024-10-08 10:48:18.372482] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:57.983 [2024-10-08 10:48:18.393901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.983 [2024-10-08 10:48:18.438268] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.983 [2024-10-08 10:48:18.549910] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:57.983 [2024-10-08 10:48:18.549991] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.245 [2024-10-08 10:48:18.711173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.245 [2024-10-08 10:48:18.711230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:58.245 [2024-10-08 10:48:18.711248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:58.245 [2024-10-08 10:48:18.711261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.245 [2024-10-08 10:48:18.711318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.245 [2024-10-08 10:48:18.711332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.245 [2024-10-08 10:48:18.711341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:58.245 [2024-10-08 10:48:18.711349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.245 [2024-10-08 10:48:18.711378] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:58.245 [2024-10-08 10:48:18.711643] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:58.245 [2024-10-08 10:48:18.711663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.245 [2024-10-08 10:48:18.711674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.245 [2024-10-08 10:48:18.711684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:16:58.245 [2024-10-08 10:48:18.711694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.245 [2024-10-08 10:48:18.713357] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:58.245 [2024-10-08 10:48:18.716962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.245 [2024-10-08 10:48:18.717011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:58.245 [2024-10-08 10:48:18.717023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.607 ms 00:16:58.245 [2024-10-08 10:48:18.717031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.245 [2024-10-08 10:48:18.717208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.245 [2024-10-08 10:48:18.717234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:58.245 [2024-10-08 10:48:18.717248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:58.245 [2024-10-08 10:48:18.717256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.245 [2024-10-08 10:48:18.725203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.245 [2024-10-08 10:48:18.725246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.245 [2024-10-08 10:48:18.725256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.891 ms 00:16:58.245 [2024-10-08 10:48:18.725275] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.245 [2024-10-08 10:48:18.725356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.245 [2024-10-08 10:48:18.725365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.245 [2024-10-08 10:48:18.725374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:16:58.245 [2024-10-08 10:48:18.725381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.245 [2024-10-08 10:48:18.725437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.245 [2024-10-08 10:48:18.725447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:58.245 [2024-10-08 10:48:18.725459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:58.245 [2024-10-08 10:48:18.725467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.245 [2024-10-08 10:48:18.725492] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:58.245 [2024-10-08 10:48:18.727509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.245 [2024-10-08 10:48:18.727691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.245 [2024-10-08 10:48:18.727708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.022 ms 00:16:58.245 [2024-10-08 10:48:18.727717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.245 [2024-10-08 10:48:18.727762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.245 [2024-10-08 10:48:18.727771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:58.245 [2024-10-08 10:48:18.727779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:58.245 [2024-10-08 10:48:18.727791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.245 [2024-10-08 10:48:18.727854] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:58.245 [2024-10-08 10:48:18.727875] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:58.245 [2024-10-08 10:48:18.727914] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:58.245 [2024-10-08 10:48:18.727929] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:58.245 [2024-10-08 10:48:18.728035] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:58.245 [2024-10-08 10:48:18.728046] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:58.245 [2024-10-08 10:48:18.728057] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:58.245 [2024-10-08 10:48:18.728074] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:58.245 [2024-10-08 10:48:18.728084] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:58.245 [2024-10-08 10:48:18.728093] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:58.245 [2024-10-08 10:48:18.728100] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:16:58.245 [2024-10-08 10:48:18.728108] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:58.245 [2024-10-08 10:48:18.728116] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:58.245 [2024-10-08 10:48:18.728124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.245 [2024-10-08 10:48:18.728133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:58.245 [2024-10-08 10:48:18.728140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:16:58.245 [2024-10-08 10:48:18.728150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.245 [2024-10-08 10:48:18.728235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.245 [2024-10-08 10:48:18.728248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:58.245 [2024-10-08 10:48:18.728257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:58.245 [2024-10-08 10:48:18.728265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.245 [2024-10-08 10:48:18.728366] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:58.245 [2024-10-08 10:48:18.728378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:58.245 [2024-10-08 10:48:18.728387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.245 [2024-10-08 10:48:18.728396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.245 [2024-10-08 10:48:18.728405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:58.245 [2024-10-08 10:48:18.728413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:58.245 [2024-10-08 10:48:18.728421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:58.245 [2024-10-08 10:48:18.728429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:58.245 [2024-10-08 10:48:18.728446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:58.245 [2024-10-08 10:48:18.728453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.245 [2024-10-08 10:48:18.728461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:58.245 [2024-10-08 10:48:18.728472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:58.245 [2024-10-08 10:48:18.728480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.245 [2024-10-08 10:48:18.728489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:58.246 [2024-10-08 10:48:18.728497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:58.246 [2024-10-08 10:48:18.728505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.246 [2024-10-08 10:48:18.728513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:58.246 [2024-10-08 10:48:18.728521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:58.246 [2024-10-08 10:48:18.728529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.246 [2024-10-08 10:48:18.728538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:58.246 [2024-10-08 10:48:18.728545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:58.246 [2024-10-08 10:48:18.728553] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.246 [2024-10-08 10:48:18.728562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:58.246 [2024-10-08 10:48:18.728570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:58.246 [2024-10-08 10:48:18.728578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.246 [2024-10-08 10:48:18.728586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:58.246 [2024-10-08 10:48:18.728594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:58.246 [2024-10-08 10:48:18.728605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.246 [2024-10-08 10:48:18.728612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:58.246 [2024-10-08 10:48:18.728619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:58.246 [2024-10-08 10:48:18.728625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.246 [2024-10-08 10:48:18.728633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:58.246 [2024-10-08 10:48:18.728640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:58.246 [2024-10-08 10:48:18.728646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.246 [2024-10-08 10:48:18.728653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:58.246 [2024-10-08 10:48:18.728660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:58.246 [2024-10-08 10:48:18.728666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.246 [2024-10-08 10:48:18.728673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:58.246 [2024-10-08 10:48:18.728680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:58.246 [2024-10-08 10:48:18.728687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.246 [2024-10-08 10:48:18.728693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:58.246 [2024-10-08 10:48:18.728700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:58.246 [2024-10-08 10:48:18.728707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.246 [2024-10-08 10:48:18.728718] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:58.246 [2024-10-08 10:48:18.728730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:58.246 [2024-10-08 10:48:18.728740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.246 [2024-10-08 10:48:18.728748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.246 [2024-10-08 10:48:18.728757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:58.246 [2024-10-08 10:48:18.728764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:58.246 [2024-10-08 10:48:18.728771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:58.246 [2024-10-08 10:48:18.728778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:58.246 [2024-10-08 10:48:18.728784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:58.246 [2024-10-08 10:48:18.728791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:16:58.246 [2024-10-08 10:48:18.728814] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:58.246 [2024-10-08 10:48:18.728824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.246 [2024-10-08 10:48:18.728833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:58.246 [2024-10-08 10:48:18.728842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:58.246 [2024-10-08 10:48:18.728851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:58.246 [2024-10-08 10:48:18.728858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:58.246 [2024-10-08 10:48:18.728868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:58.246 [2024-10-08 10:48:18.728875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:58.246 [2024-10-08 10:48:18.728882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:58.246 [2024-10-08 10:48:18.728890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:58.246 [2024-10-08 10:48:18.728898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:58.246 [2024-10-08 10:48:18.728906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:58.246 [2024-10-08 10:48:18.728913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:58.246 [2024-10-08 10:48:18.728921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:58.246 [2024-10-08 10:48:18.728928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:58.246 [2024-10-08 10:48:18.728936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:58.246 [2024-10-08 10:48:18.728944] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:58.246 [2024-10-08 10:48:18.728954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.246 [2024-10-08 10:48:18.728962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:58.246 [2024-10-08 10:48:18.728970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:58.246 [2024-10-08 10:48:18.728978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:58.246 [2024-10-08 10:48:18.728985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:58.246 [2024-10-08 10:48:18.728995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.246 [2024-10-08 10:48:18.729004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:58.246 [2024-10-08 10:48:18.729012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:16:58.246 [2024-10-08 10:48:18.729019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.246 [2024-10-08 10:48:18.762995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.246 [2024-10-08 10:48:18.763343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.246 [2024-10-08 10:48:18.763554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.924 ms 00:16:58.246 [2024-10-08 10:48:18.763626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.246 [2024-10-08 10:48:18.763929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.246 [2024-10-08 10:48:18.764014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:58.246 [2024-10-08 10:48:18.764148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:16:58.246 [2024-10-08 10:48:18.764212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.246 [2024-10-08 10:48:18.776057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.246 [2024-10-08 10:48:18.776214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.246 [2024-10-08 10:48:18.776271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.586 ms 00:16:58.246 [2024-10-08 10:48:18.776303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.246 [2024-10-08 10:48:18.776353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.246 [2024-10-08 10:48:18.776377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.246 [2024-10-08 10:48:18.776399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:58.246 [2024-10-08 10:48:18.776418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.246 [2024-10-08 10:48:18.776996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.246 [2024-10-08 10:48:18.777179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.246 [2024-10-08 10:48:18.777336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:16:58.246 [2024-10-08 10:48:18.777376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.246 [2024-10-08 10:48:18.777963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.246 [2024-10-08 10:48:18.778096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.246 [2024-10-08 10:48:18.778160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:16:58.246 [2024-10-08 10:48:18.778184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.246 [2024-10-08 10:48:18.784943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.246 [2024-10-08 
10:48:18.785105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.246 [2024-10-08 10:48:18.785177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.717 ms 00:16:58.246 [2024-10-08 10:48:18.785201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.246 [2024-10-08 10:48:18.788964] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:58.246 [2024-10-08 10:48:18.789157] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:58.246 [2024-10-08 10:48:18.789228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.246 [2024-10-08 10:48:18.789250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:58.246 [2024-10-08 10:48:18.789270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.914 ms 00:16:58.246 [2024-10-08 10:48:18.789299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.246 [2024-10-08 10:48:18.804922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.246 [2024-10-08 10:48:18.805081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:58.246 [2024-10-08 10:48:18.805160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.565 ms 00:16:58.246 [2024-10-08 10:48:18.805183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.246 [2024-10-08 10:48:18.807745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.246 [2024-10-08 10:48:18.807920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:58.246 [2024-10-08 10:48:18.807987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.503 ms 00:16:58.246 [2024-10-08 10:48:18.808010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.247 [2024-10-08 10:48:18.810187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.247 [2024-10-08 10:48:18.810332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:58.247 [2024-10-08 10:48:18.810387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.073 ms 00:16:58.247 [2024-10-08 10:48:18.810410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.247 [2024-10-08 10:48:18.810817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.247 [2024-10-08 10:48:18.810876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:58.247 [2024-10-08 10:48:18.810990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:16:58.247 [2024-10-08 10:48:18.811002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.509 [2024-10-08 10:48:18.833290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.509 [2024-10-08 10:48:18.833478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:58.509 [2024-10-08 10:48:18.833541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.264 ms 00:16:58.509 [2024-10-08 10:48:18.833565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.509 [2024-10-08 10:48:18.841625] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:16:58.509 [2024-10-08 10:48:18.844475] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.509 [2024-10-08 10:48:18.844596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:58.509 [2024-10-08 10:48:18.844660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.799 ms 00:16:58.509 [2024-10-08 10:48:18.844687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.509 [2024-10-08 10:48:18.844771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.509 [2024-10-08 10:48:18.844811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:58.509 [2024-10-08 10:48:18.844834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:58.509 [2024-10-08 10:48:18.844854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.509 [2024-10-08 10:48:18.844992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.509 [2024-10-08 10:48:18.845021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:58.509 [2024-10-08 10:48:18.845044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:58.509 [2024-10-08 10:48:18.845064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.509 [2024-10-08 10:48:18.845104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.509 [2024-10-08 10:48:18.845308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:58.509 [2024-10-08 10:48:18.845329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:58.509 [2024-10-08 10:48:18.845348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.509 [2024-10-08 10:48:18.845399] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:58.509 [2024-10-08 10:48:18.845474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.509 [2024-10-08 10:48:18.845499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:58.509 [2024-10-08 10:48:18.845520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:16:58.509 [2024-10-08 10:48:18.845539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.509 [2024-10-08 10:48:18.850933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.509 [2024-10-08 10:48:18.851084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:58.509 [2024-10-08 10:48:18.851139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.358 ms 00:16:58.509 [2024-10-08 10:48:18.851162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.509 [2024-10-08 10:48:18.851327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.509 [2024-10-08 10:48:18.851383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:58.509 [2024-10-08 10:48:18.851411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:58.509 [2024-10-08 10:48:18.851434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.509 [2024-10-08 10:48:18.852568] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 140.945 ms, result 0 00:16:59.451  [2024-10-08T10:48:20.970Z] Copying: 12/1024 [MB] (12 MBps) [2024-10-08T10:48:21.910Z] Copying: 29/1024 [MB] (17 MBps) 
[2024-10-08T10:48:23.293Z] Copying: 47/1024 [MB] (18 MBps) [2024-10-08T10:48:24.226Z] Copying: 62/1024 [MB] (14 MBps) [2024-10-08T10:48:25.161Z] Copying: 88/1024 [MB] (26 MBps) [2024-10-08T10:48:26.101Z] Copying: 124/1024 [MB] (35 MBps) [2024-10-08T10:48:27.075Z] Copying: 142/1024 [MB] (18 MBps) [2024-10-08T10:48:28.017Z] Copying: 152/1024 [MB] (10 MBps) [2024-10-08T10:48:28.954Z] Copying: 163/1024 [MB] (10 MBps) [2024-10-08T10:48:29.886Z] Copying: 177/1024 [MB] (14 MBps) [2024-10-08T10:48:31.269Z] Copying: 207/1024 [MB] (30 MBps) [2024-10-08T10:48:32.212Z] Copying: 232/1024 [MB] (24 MBps) [2024-10-08T10:48:33.153Z] Copying: 243/1024 [MB] (11 MBps) [2024-10-08T10:48:34.094Z] Copying: 256/1024 [MB] (12 MBps) [2024-10-08T10:48:35.037Z] Copying: 268/1024 [MB] (11 MBps) [2024-10-08T10:48:35.977Z] Copying: 284584/1048576 [kB] (10148 kBps) [2024-10-08T10:48:36.911Z] Copying: 289/1024 [MB] (11 MBps) [2024-10-08T10:48:38.290Z] Copying: 321/1024 [MB] (31 MBps) [2024-10-08T10:48:39.231Z] Copying: 338/1024 [MB] (17 MBps) [2024-10-08T10:48:40.165Z] Copying: 372/1024 [MB] (33 MBps) [2024-10-08T10:48:41.106Z] Copying: 405/1024 [MB] (33 MBps) [2024-10-08T10:48:42.037Z] Copying: 417/1024 [MB] (11 MBps) [2024-10-08T10:48:42.970Z] Copying: 444/1024 [MB] (26 MBps) [2024-10-08T10:48:43.902Z] Copying: 472/1024 [MB] (27 MBps) [2024-10-08T10:48:44.876Z] Copying: 500/1024 [MB] (28 MBps) [2024-10-08T10:48:46.262Z] Copying: 521/1024 [MB] (21 MBps) [2024-10-08T10:48:47.205Z] Copying: 531/1024 [MB] (10 MBps) [2024-10-08T10:48:48.149Z] Copying: 553/1024 [MB] (21 MBps) [2024-10-08T10:48:49.102Z] Copying: 572/1024 [MB] (19 MBps) [2024-10-08T10:48:50.044Z] Copying: 587/1024 [MB] (14 MBps) [2024-10-08T10:48:50.986Z] Copying: 605/1024 [MB] (18 MBps) [2024-10-08T10:48:51.934Z] Copying: 622/1024 [MB] (16 MBps) [2024-10-08T10:48:52.876Z] Copying: 637/1024 [MB] (14 MBps) [2024-10-08T10:48:54.261Z] Copying: 658/1024 [MB] (21 MBps) [2024-10-08T10:48:55.207Z] Copying: 672/1024 [MB] (13 MBps) [2024-10-08T10:48:56.152Z] Copying: 685/1024 [MB] (13 MBps) [2024-10-08T10:48:57.096Z] Copying: 707/1024 [MB] (22 MBps) [2024-10-08T10:48:58.038Z] Copying: 723/1024 [MB] (15 MBps) [2024-10-08T10:48:58.977Z] Copying: 737/1024 [MB] (13 MBps) [2024-10-08T10:48:59.920Z] Copying: 749/1024 [MB] (11 MBps) [2024-10-08T10:49:01.305Z] Copying: 762/1024 [MB] (13 MBps) [2024-10-08T10:49:01.963Z] Copying: 775/1024 [MB] (12 MBps) [2024-10-08T10:49:02.907Z] Copying: 791/1024 [MB] (15 MBps) [2024-10-08T10:49:04.297Z] Copying: 806/1024 [MB] (15 MBps) [2024-10-08T10:49:04.871Z] Copying: 818/1024 [MB] (12 MBps) [2024-10-08T10:49:06.260Z] Copying: 830/1024 [MB] (11 MBps) [2024-10-08T10:49:07.203Z] Copying: 841/1024 [MB] (11 MBps) [2024-10-08T10:49:08.148Z] Copying: 853/1024 [MB] (11 MBps) [2024-10-08T10:49:09.093Z] Copying: 864/1024 [MB] (11 MBps) [2024-10-08T10:49:10.040Z] Copying: 874/1024 [MB] (10 MBps) [2024-10-08T10:49:10.986Z] Copying: 885/1024 [MB] (10 MBps) [2024-10-08T10:49:11.926Z] Copying: 895/1024 [MB] (10 MBps) [2024-10-08T10:49:12.866Z] Copying: 911/1024 [MB] (15 MBps) [2024-10-08T10:49:14.244Z] Copying: 929/1024 [MB] (18 MBps) [2024-10-08T10:49:15.182Z] Copying: 943/1024 [MB] (13 MBps) [2024-10-08T10:49:16.121Z] Copying: 956/1024 [MB] (12 MBps) [2024-10-08T10:49:17.057Z] Copying: 970/1024 [MB] (14 MBps) [2024-10-08T10:49:17.997Z] Copying: 1003/1024 [MB] (32 MBps) [2024-10-08T10:49:18.258Z] Copying: 1019/1024 [MB] (15 MBps) [2024-10-08T10:49:18.258Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-10-08 10:49:18.172736] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.681 [2024-10-08 10:49:18.172815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:57.681 [2024-10-08 10:49:18.172832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:57.681 [2024-10-08 10:49:18.172841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.681 [2024-10-08 10:49:18.172864] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:57.681 [2024-10-08 10:49:18.173658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.681 [2024-10-08 10:49:18.173684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:57.681 [2024-10-08 10:49:18.173697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.774 ms 00:17:57.681 [2024-10-08 10:49:18.173706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.681 [2024-10-08 10:49:18.176669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.681 [2024-10-08 10:49:18.176720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:57.681 [2024-10-08 10:49:18.176732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.939 ms 00:17:57.681 [2024-10-08 10:49:18.176740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.681 [2024-10-08 10:49:18.195949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.681 [2024-10-08 10:49:18.196010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:57.681 [2024-10-08 10:49:18.196022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.191 ms 00:17:57.681 [2024-10-08 10:49:18.196037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.681 [2024-10-08 10:49:18.202285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.681 [2024-10-08 10:49:18.202337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:57.681 [2024-10-08 10:49:18.202348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.206 ms 00:17:57.681 [2024-10-08 10:49:18.202356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.681 [2024-10-08 10:49:18.205273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.681 [2024-10-08 10:49:18.205321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:57.681 [2024-10-08 10:49:18.205332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.852 ms 00:17:57.681 [2024-10-08 10:49:18.205339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.681 [2024-10-08 10:49:18.210207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.681 [2024-10-08 10:49:18.210401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:57.681 [2024-10-08 10:49:18.210421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.825 ms 00:17:57.681 [2024-10-08 10:49:18.210429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.681 [2024-10-08 10:49:18.210583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.681 [2024-10-08 10:49:18.210595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:57.681 [2024-10-08 10:49:18.210605] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:57.681 [2024-10-08 10:49:18.210613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.681 [2024-10-08 10:49:18.213783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.681 [2024-10-08 10:49:18.213846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:57.681 [2024-10-08 10:49:18.213856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.152 ms 00:17:57.681 [2024-10-08 10:49:18.213864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.681 [2024-10-08 10:49:18.216608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.681 [2024-10-08 10:49:18.216655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:57.681 [2024-10-08 10:49:18.216665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.699 ms 00:17:57.681 [2024-10-08 10:49:18.216671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.681 [2024-10-08 10:49:18.218947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.681 [2024-10-08 10:49:18.218994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:57.681 [2024-10-08 10:49:18.219016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.235 ms 00:17:57.681 [2024-10-08 10:49:18.219022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.681 [2024-10-08 10:49:18.221191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.681 [2024-10-08 10:49:18.221253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:57.681 [2024-10-08 10:49:18.221263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.103 ms 00:17:57.681 [2024-10-08 10:49:18.221270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.681 [2024-10-08 10:49:18.221309] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:57.681 [2024-10-08 10:49:18.221326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:57.681 [2024-10-08 10:49:18.221343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:57.681 [2024-10-08 10:49:18.221351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:57.681 [2024-10-08 10:49:18.221359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:57.681 [2024-10-08 10:49:18.221368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:57.681 [2024-10-08 10:49:18.221375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:57.681 [2024-10-08 10:49:18.221382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:57.681 [2024-10-08 10:49:18.221390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221413] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 
10:49:18.221607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.221790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:17:57.682 [2024-10-08 10:49:18.222069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:57.682 [2024-10-08 10:49:18.222836] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:57.682 [2024-10-08 10:49:18.222845] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 50a8d519-5012-4166-b574-4fc22bad979e 00:17:57.682 [2024-10-08 10:49:18.222854] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:57.682 [2024-10-08 10:49:18.222862] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:57.683 [2024-10-08 10:49:18.222869] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:57.683 [2024-10-08 10:49:18.222878] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:57.683 [2024-10-08 10:49:18.222885] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:57.683 [2024-10-08 10:49:18.222894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:57.683 [2024-10-08 10:49:18.222901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:57.683 [2024-10-08 10:49:18.222908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:57.683 [2024-10-08 10:49:18.222914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:57.683 [2024-10-08 10:49:18.222922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.683 
[2024-10-08 10:49:18.222931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:57.683 [2024-10-08 10:49:18.222942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:17:57.683 [2024-10-08 10:49:18.222960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.683 [2024-10-08 10:49:18.225294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.683 [2024-10-08 10:49:18.225329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:57.683 [2024-10-08 10:49:18.225341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.308 ms 00:17:57.683 [2024-10-08 10:49:18.225350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.683 [2024-10-08 10:49:18.225481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.683 [2024-10-08 10:49:18.225491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:57.683 [2024-10-08 10:49:18.225515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:57.683 [2024-10-08 10:49:18.225523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.683 [2024-10-08 10:49:18.232386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.683 [2024-10-08 10:49:18.232561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.683 [2024-10-08 10:49:18.232580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.683 [2024-10-08 10:49:18.232589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.683 [2024-10-08 10:49:18.232649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.683 [2024-10-08 10:49:18.232658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.683 [2024-10-08 10:49:18.232673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.683 [2024-10-08 10:49:18.232681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.683 [2024-10-08 10:49:18.232728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.683 [2024-10-08 10:49:18.232738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.683 [2024-10-08 10:49:18.232752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.683 [2024-10-08 10:49:18.232760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.683 [2024-10-08 10:49:18.232775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.683 [2024-10-08 10:49:18.232783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.683 [2024-10-08 10:49:18.232838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.683 [2024-10-08 10:49:18.232849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.683 [2024-10-08 10:49:18.246653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.683 [2024-10-08 10:49:18.246703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.683 [2024-10-08 10:49:18.246716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.683 [2024-10-08 10:49:18.246724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.945 [2024-10-08 10:49:18.257698] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.945 [2024-10-08 10:49:18.257751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.945 [2024-10-08 10:49:18.257770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.945 [2024-10-08 10:49:18.257779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.945 [2024-10-08 10:49:18.257854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.945 [2024-10-08 10:49:18.257864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.945 [2024-10-08 10:49:18.257879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.945 [2024-10-08 10:49:18.257886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.945 [2024-10-08 10:49:18.257922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.945 [2024-10-08 10:49:18.257932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.945 [2024-10-08 10:49:18.257941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.945 [2024-10-08 10:49:18.257949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.945 [2024-10-08 10:49:18.258048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.945 [2024-10-08 10:49:18.258058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.945 [2024-10-08 10:49:18.258067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.945 [2024-10-08 10:49:18.258075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.945 [2024-10-08 10:49:18.258103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.945 [2024-10-08 10:49:18.258113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:57.945 [2024-10-08 10:49:18.258122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.945 [2024-10-08 10:49:18.258131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.945 [2024-10-08 10:49:18.258174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.945 [2024-10-08 10:49:18.258184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.945 [2024-10-08 10:49:18.258192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.945 [2024-10-08 10:49:18.258200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.945 [2024-10-08 10:49:18.258248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.945 [2024-10-08 10:49:18.258259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.945 [2024-10-08 10:49:18.258267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.945 [2024-10-08 10:49:18.258276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.945 [2024-10-08 10:49:18.258421] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.644 ms, result 0 00:17:58.232 00:17:58.232 00:17:58.232 10:49:18 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:17:58.232 [2024-10-08 10:49:18.641322] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:58.232 [2024-10-08 10:49:18.641653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88141 ] 00:17:58.232 [2024-10-08 10:49:18.775210] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:58.232 [2024-10-08 10:49:18.789987] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.492 [2024-10-08 10:49:18.841345] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.492 [2024-10-08 10:49:18.959228] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:58.492 [2024-10-08 10:49:18.959316] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:58.755 [2024-10-08 10:49:19.120413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.755 [2024-10-08 10:49:19.120474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:58.755 [2024-10-08 10:49:19.120493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:58.755 [2024-10-08 10:49:19.120502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.755 [2024-10-08 10:49:19.120560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.755 [2024-10-08 10:49:19.120570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:58.755 [2024-10-08 10:49:19.120579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:58.755 [2024-10-08 10:49:19.120587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.755 [2024-10-08 10:49:19.120611] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:58.755 [2024-10-08 10:49:19.120897] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:58.755 [2024-10-08 10:49:19.120914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.755 [2024-10-08 10:49:19.120925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:58.755 [2024-10-08 10:49:19.120936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:17:58.755 [2024-10-08 10:49:19.120947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.755 [2024-10-08 10:49:19.122742] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:58.755 [2024-10-08 10:49:19.126748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.755 [2024-10-08 10:49:19.127000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:58.755 [2024-10-08 10:49:19.127023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.008 ms 00:17:58.755 [2024-10-08 10:49:19.127032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.755 [2024-10-08 10:49:19.127136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.755 [2024-10-08 10:49:19.127150] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:58.755 [2024-10-08 10:49:19.127160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:58.755 [2024-10-08 10:49:19.127168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.755 [2024-10-08 10:49:19.135328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.755 [2024-10-08 10:49:19.135372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:58.755 [2024-10-08 10:49:19.135382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.114 ms 00:17:58.755 [2024-10-08 10:49:19.135398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.755 [2024-10-08 10:49:19.135488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.755 [2024-10-08 10:49:19.135502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:58.755 [2024-10-08 10:49:19.135511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:58.755 [2024-10-08 10:49:19.135519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.755 [2024-10-08 10:49:19.135582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.755 [2024-10-08 10:49:19.135593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:58.755 [2024-10-08 10:49:19.135601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:58.755 [2024-10-08 10:49:19.135609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.755 [2024-10-08 10:49:19.135634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:58.755 [2024-10-08 10:49:19.137733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.755 [2024-10-08 10:49:19.137774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:58.755 [2024-10-08 10:49:19.137785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.104 ms 00:17:58.755 [2024-10-08 10:49:19.137823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.755 [2024-10-08 10:49:19.137862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.755 [2024-10-08 10:49:19.137876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:58.755 [2024-10-08 10:49:19.137885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:58.755 [2024-10-08 10:49:19.137894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.755 [2024-10-08 10:49:19.137927] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:58.755 [2024-10-08 10:49:19.137948] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:58.755 [2024-10-08 10:49:19.137988] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:58.755 [2024-10-08 10:49:19.138008] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:58.755 [2024-10-08 10:49:19.138114] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:58.755 [2024-10-08 10:49:19.138125] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl0] base layout blob store 0x48 bytes 00:17:58.755 [2024-10-08 10:49:19.138140] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:58.755 [2024-10-08 10:49:19.138153] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:58.755 [2024-10-08 10:49:19.138162] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:58.755 [2024-10-08 10:49:19.138177] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:58.755 [2024-10-08 10:49:19.138185] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:58.755 [2024-10-08 10:49:19.138193] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:58.755 [2024-10-08 10:49:19.138202] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:58.755 [2024-10-08 10:49:19.138210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.755 [2024-10-08 10:49:19.138217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:58.755 [2024-10-08 10:49:19.138226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:17:58.755 [2024-10-08 10:49:19.138235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.755 [2024-10-08 10:49:19.138317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.755 [2024-10-08 10:49:19.138328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:58.755 [2024-10-08 10:49:19.138336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:58.755 [2024-10-08 10:49:19.138343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.755 [2024-10-08 10:49:19.138439] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:58.755 [2024-10-08 10:49:19.138451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:58.755 [2024-10-08 10:49:19.138467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:58.755 [2024-10-08 10:49:19.138475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.755 [2024-10-08 10:49:19.138484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:58.755 [2024-10-08 10:49:19.138492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:58.755 [2024-10-08 10:49:19.138500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:58.755 [2024-10-08 10:49:19.138510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:58.755 [2024-10-08 10:49:19.138524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:58.755 [2024-10-08 10:49:19.138531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:58.755 [2024-10-08 10:49:19.138540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:58.755 [2024-10-08 10:49:19.138551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:58.755 [2024-10-08 10:49:19.138559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:58.755 [2024-10-08 10:49:19.138567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:58.755 [2024-10-08 10:49:19.138575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:58.755 
[2024-10-08 10:49:19.138583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.756 [2024-10-08 10:49:19.138591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:58.756 [2024-10-08 10:49:19.138599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:58.756 [2024-10-08 10:49:19.138607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.756 [2024-10-08 10:49:19.138618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:58.756 [2024-10-08 10:49:19.138626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:58.756 [2024-10-08 10:49:19.138634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.756 [2024-10-08 10:49:19.138642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:58.756 [2024-10-08 10:49:19.138650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:58.756 [2024-10-08 10:49:19.138657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.756 [2024-10-08 10:49:19.138666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:58.756 [2024-10-08 10:49:19.138674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:58.756 [2024-10-08 10:49:19.138687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.756 [2024-10-08 10:49:19.138695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:58.756 [2024-10-08 10:49:19.138704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:58.756 [2024-10-08 10:49:19.138712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:58.756 [2024-10-08 10:49:19.138720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:58.756 [2024-10-08 10:49:19.138727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:58.756 [2024-10-08 10:49:19.138735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:58.756 [2024-10-08 10:49:19.138743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:58.756 [2024-10-08 10:49:19.138752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:58.756 [2024-10-08 10:49:19.138760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:58.756 [2024-10-08 10:49:19.138766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:58.756 [2024-10-08 10:49:19.138773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:58.756 [2024-10-08 10:49:19.138780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.756 [2024-10-08 10:49:19.138786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:58.756 [2024-10-08 10:49:19.139057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:58.756 [2024-10-08 10:49:19.139098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.756 [2024-10-08 10:49:19.139125] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:58.756 [2024-10-08 10:49:19.139148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:58.756 [2024-10-08 10:49:19.139171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:58.756 [2024-10-08 10:49:19.139191] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:58.756 [2024-10-08 10:49:19.139211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:58.756 [2024-10-08 10:49:19.139230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:58.756 [2024-10-08 10:49:19.139248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:58.756 [2024-10-08 10:49:19.139267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:58.756 [2024-10-08 10:49:19.139287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:58.756 [2024-10-08 10:49:19.139356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:58.756 [2024-10-08 10:49:19.139383] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:58.756 [2024-10-08 10:49:19.139416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:58.756 [2024-10-08 10:49:19.139447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:58.756 [2024-10-08 10:49:19.139477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:58.756 [2024-10-08 10:49:19.139506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:58.756 [2024-10-08 10:49:19.139535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:58.756 [2024-10-08 10:49:19.139568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:58.756 [2024-10-08 10:49:19.139630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:58.756 [2024-10-08 10:49:19.139661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:58.756 [2024-10-08 10:49:19.139690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:58.756 [2024-10-08 10:49:19.139719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:58.756 [2024-10-08 10:49:19.139748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:58.756 [2024-10-08 10:49:19.139778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:58.756 [2024-10-08 10:49:19.140011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:58.756 [2024-10-08 10:49:19.140699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:58.756 [2024-10-08 10:49:19.140728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:58.756 [2024-10-08 10:49:19.140737] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:58.756 [2024-10-08 10:49:19.140748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:58.756 [2024-10-08 10:49:19.140758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:58.756 [2024-10-08 10:49:19.140765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:58.756 [2024-10-08 10:49:19.140773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:58.756 [2024-10-08 10:49:19.140780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:58.756 [2024-10-08 10:49:19.140827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.756 [2024-10-08 10:49:19.140844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:58.756 [2024-10-08 10:49:19.140855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.421 ms 00:17:58.756 [2024-10-08 10:49:19.140863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.756 [2024-10-08 10:49:19.166506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.756 [2024-10-08 10:49:19.166599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.756 [2024-10-08 10:49:19.166624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.541 ms 00:17:58.756 [2024-10-08 10:49:19.166639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.756 [2024-10-08 10:49:19.166851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.756 [2024-10-08 10:49:19.166871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:58.756 [2024-10-08 10:49:19.166887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:17:58.756 [2024-10-08 10:49:19.166911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.756 [2024-10-08 10:49:19.179533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.756 [2024-10-08 10:49:19.179584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.756 [2024-10-08 10:49:19.179596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.512 ms 00:17:58.756 [2024-10-08 10:49:19.179604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.756 [2024-10-08 10:49:19.179641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.756 [2024-10-08 10:49:19.179658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.756 [2024-10-08 10:49:19.179667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:58.756 [2024-10-08 10:49:19.179675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.756 [2024-10-08 10:49:19.180269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.756 [2024-10-08 10:49:19.180305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.756 [2024-10-08 10:49:19.180318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.538 ms 00:17:58.756 [2024-10-08 10:49:19.180326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.756 [2024-10-08 10:49:19.180479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.756 [2024-10-08 10:49:19.180497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.756 [2024-10-08 10:49:19.180507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:17:58.756 [2024-10-08 10:49:19.180516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.756 [2024-10-08 10:49:19.187585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.756 [2024-10-08 10:49:19.187784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:58.756 [2024-10-08 10:49:19.187818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.040 ms 00:17:58.756 [2024-10-08 10:49:19.187827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.756 [2024-10-08 10:49:19.191665] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:58.756 [2024-10-08 10:49:19.191715] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:58.756 [2024-10-08 10:49:19.191728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.756 [2024-10-08 10:49:19.191736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:58.756 [2024-10-08 10:49:19.191752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.798 ms 00:17:58.756 [2024-10-08 10:49:19.191760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.756 [2024-10-08 10:49:19.207566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.756 [2024-10-08 10:49:19.207621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:58.756 [2024-10-08 10:49:19.207633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.722 ms 00:17:58.756 [2024-10-08 10:49:19.207647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.756 [2024-10-08 10:49:19.210658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.756 [2024-10-08 10:49:19.210868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:58.756 [2024-10-08 10:49:19.210887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.950 ms 00:17:58.756 [2024-10-08 10:49:19.210895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.756 [2024-10-08 10:49:19.213547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.756 [2024-10-08 10:49:19.213597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:58.756 [2024-10-08 10:49:19.213608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:17:58.757 [2024-10-08 10:49:19.213616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.757 [2024-10-08 10:49:19.214023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.757 [2024-10-08 10:49:19.214039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:58.757 [2024-10-08 10:49:19.214048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:17:58.757 [2024-10-08 
10:49:19.214056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.757 [2024-10-08 10:49:19.237367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.757 [2024-10-08 10:49:19.237594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:58.757 [2024-10-08 10:49:19.237624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.288 ms 00:17:58.757 [2024-10-08 10:49:19.237634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.757 [2024-10-08 10:49:19.245829] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:58.757 [2024-10-08 10:49:19.248864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.757 [2024-10-08 10:49:19.248912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:58.757 [2024-10-08 10:49:19.248925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.112 ms 00:17:58.757 [2024-10-08 10:49:19.248936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.757 [2024-10-08 10:49:19.249018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.757 [2024-10-08 10:49:19.249031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:58.757 [2024-10-08 10:49:19.249040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:58.757 [2024-10-08 10:49:19.249048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.757 [2024-10-08 10:49:19.249115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.757 [2024-10-08 10:49:19.249127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:58.757 [2024-10-08 10:49:19.249141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:58.757 [2024-10-08 10:49:19.249153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.757 [2024-10-08 10:49:19.249173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.757 [2024-10-08 10:49:19.249184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:58.757 [2024-10-08 10:49:19.249192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:58.757 [2024-10-08 10:49:19.249201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.757 [2024-10-08 10:49:19.249257] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:58.757 [2024-10-08 10:49:19.249268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.757 [2024-10-08 10:49:19.249276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:58.757 [2024-10-08 10:49:19.249291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:58.757 [2024-10-08 10:49:19.249302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.757 [2024-10-08 10:49:19.254855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.757 [2024-10-08 10:49:19.254899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:58.757 [2024-10-08 10:49:19.254911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.532 ms 00:17:58.757 [2024-10-08 10:49:19.254919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.757 [2024-10-08 10:49:19.255006] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.757 [2024-10-08 10:49:19.255020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:58.757 [2024-10-08 10:49:19.255030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:58.757 [2024-10-08 10:49:19.255041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.757 [2024-10-08 10:49:19.256204] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.304 ms, result 0 00:18:00.279
[2024-10-08T10:49:21.800Z] Copying: 20/1024 [MB] (20 MBps) [... intermediate Copying progress updates (roughly 9-24 MB per interval) omitted ...] [2024-10-08T10:50:36.487Z] Copying: 1024/1024 [MB] (average 13 MBps)
[2024-10-08 10:50:36.400202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.910 [2024-10-08 10:50:36.400288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:15.910 [2024-10-08 10:50:36.400312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:15.910 [2024-10-08 10:50:36.400322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.910 [2024-10-08 10:50:36.400353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:15.910 [2024-10-08 10:50:36.401143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.910 [2024-10-08 10:50:36.401179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:15.910 [2024-10-08 10:50:36.401191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:19:15.910 [2024-10-08 10:50:36.401201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.910 [2024-10-08 10:50:36.401487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.910 [2024-10-08 10:50:36.401499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:15.910 [2024-10-08 10:50:36.401510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:19:15.910 [2024-10-08 10:50:36.401520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.910 [2024-10-08 10:50:36.406825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.910 [2024-10-08 10:50:36.406877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:15.910
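For orientation, here is a minimal bash sketch of the read-back step that produced the copy progress above (the spdk_dd invocation from ftl/restore.sh@74 earlier in the log). The $SPDK variable and the 4 KiB logical-block assumption are illustrative, not taken from the log; under that assumption the block arithmetic matches the 1024 MiB total shown in the progress output.

#!/usr/bin/env bash
# Sketch only: read the test region back out of the FTL bdev into a file.
SPDK=/home/vagrant/spdk_repo/spdk   # assumed repo path, as seen in the log

# --ib names the input *bdev* (ftl0) rather than a file; --of is a regular
# output file; --json points spdk_dd at the config that recreates ftl0.
# Assuming a 4 KiB logical block, --count=262144 copies
# 262144 * 4096 B = 1 GiB, matching "Copying: 1024/1024 [MB]" above.
"$SPDK/build/bin/spdk_dd" --ib=ftl0 \
    --of="$SPDK/test/ftl/testfile" \
    --json="$SPDK/test/ftl/config/ftl.json" \
    --count=262144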
[2024-10-08 10:50:36.406892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.279 ms 00:19:15.910 [2024-10-08 10:50:36.406912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.910 [2024-10-08 10:50:36.415500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.910 [2024-10-08 10:50:36.415545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:15.910 [2024-10-08 10:50:36.415558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.563 ms 00:19:15.910 [2024-10-08 10:50:36.415566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.910 [2024-10-08 10:50:36.419071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.910 [2024-10-08 10:50:36.419279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:15.910 [2024-10-08 10:50:36.419309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.429 ms 00:19:15.910 [2024-10-08 10:50:36.419318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.910 [2024-10-08 10:50:36.425169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.910 [2024-10-08 10:50:36.425225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:15.910 [2024-10-08 10:50:36.425237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.807 ms 00:19:15.910 [2024-10-08 10:50:36.425258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.910 [2024-10-08 10:50:36.425418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.910 [2024-10-08 10:50:36.425431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:15.910 [2024-10-08 10:50:36.425442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:19:15.910 [2024-10-08 10:50:36.425452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.910 [2024-10-08 10:50:36.428894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.910 [2024-10-08 10:50:36.428945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:15.910 [2024-10-08 10:50:36.428956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.423 ms 00:19:15.910 [2024-10-08 10:50:36.428964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.911 [2024-10-08 10:50:36.432051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.911 [2024-10-08 10:50:36.432103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:15.911 [2024-10-08 10:50:36.432114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.041 ms 00:19:15.911 [2024-10-08 10:50:36.432121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.911 [2024-10-08 10:50:36.434617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.911 [2024-10-08 10:50:36.434683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:15.911 [2024-10-08 10:50:36.434695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.449 ms 00:19:15.911 [2024-10-08 10:50:36.434703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.911 [2024-10-08 10:50:36.437178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.911 [2024-10-08 10:50:36.438303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Set FTL clean state 00:19:15.911 [2024-10-08 10:50:36.438384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.402 ms 00:19:15.911 [2024-10-08 10:50:36.438411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.911 [2024-10-08 10:50:36.438540] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:15.911 [2024-10-08 10:50:36.438724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.438765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.438862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.438898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.438931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.438964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.438996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439544] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 
[2024-10-08 10:50:36.439766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 
state: free 00:19:15.911 [2024-10-08 10:50:36.439989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.439998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.440006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.440014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.440023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.440030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.440038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.440047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.440056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.440063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.440072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.440081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:15.911 [2024-10-08 10:50:36.440089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 
0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:15.912 [2024-10-08 10:50:36.440235] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:15.912 [2024-10-08 10:50:36.440244] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 50a8d519-5012-4166-b574-4fc22bad979e 00:19:15.912 [2024-10-08 10:50:36.440252] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:15.912 [2024-10-08 10:50:36.440262] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:15.912 [2024-10-08 10:50:36.440270] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:15.912 [2024-10-08 10:50:36.440278] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:15.912 [2024-10-08 10:50:36.440285] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:15.912 [2024-10-08 10:50:36.440292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:15.912 [2024-10-08 10:50:36.440302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:15.912 [2024-10-08 10:50:36.440309] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:15.912 [2024-10-08 10:50:36.440316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:15.912 [2024-10-08 10:50:36.440325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.912 [2024-10-08 10:50:36.440333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:15.912 [2024-10-08 10:50:36.440357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.789 ms 00:19:15.912 [2024-10-08 10:50:36.440368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 10:50:36.442910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.912 [2024-10-08 10:50:36.442943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:15.912 [2024-10-08 10:50:36.442966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.517 ms 00:19:15.912 [2024-10-08 10:50:36.442975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 10:50:36.443095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.912 [2024-10-08 10:50:36.443115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:15.912 [2024-10-08 10:50:36.443124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:15.912 [2024-10-08 10:50:36.443136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 10:50:36.450928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.912 [2024-10-08 10:50:36.450976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:15.912 [2024-10-08 10:50:36.450986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.912 [2024-10-08 10:50:36.450995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 
10:50:36.451065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.912 [2024-10-08 10:50:36.451082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:15.912 [2024-10-08 10:50:36.451090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.912 [2024-10-08 10:50:36.451098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 10:50:36.451168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.912 [2024-10-08 10:50:36.451180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:15.912 [2024-10-08 10:50:36.451188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.912 [2024-10-08 10:50:36.451196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 10:50:36.451212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.912 [2024-10-08 10:50:36.451221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:15.912 [2024-10-08 10:50:36.451233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.912 [2024-10-08 10:50:36.451241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 10:50:36.466186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.912 [2024-10-08 10:50:36.466240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:15.912 [2024-10-08 10:50:36.466252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.912 [2024-10-08 10:50:36.466263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 10:50:36.476974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.912 [2024-10-08 10:50:36.477213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:15.912 [2024-10-08 10:50:36.477231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.912 [2024-10-08 10:50:36.477242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 10:50:36.477298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.912 [2024-10-08 10:50:36.477309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:15.912 [2024-10-08 10:50:36.477317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.912 [2024-10-08 10:50:36.477326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 10:50:36.477386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.912 [2024-10-08 10:50:36.477396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:15.912 [2024-10-08 10:50:36.477405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.912 [2024-10-08 10:50:36.477417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 10:50:36.477501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.912 [2024-10-08 10:50:36.477514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:15.912 [2024-10-08 10:50:36.477523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.912 [2024-10-08 10:50:36.477532] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 10:50:36.477560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.912 [2024-10-08 10:50:36.477575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:15.912 [2024-10-08 10:50:36.477584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.912 [2024-10-08 10:50:36.477592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 10:50:36.477635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.912 [2024-10-08 10:50:36.477645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:15.912 [2024-10-08 10:50:36.477654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.912 [2024-10-08 10:50:36.477662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 10:50:36.477708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.912 [2024-10-08 10:50:36.477720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:15.912 [2024-10-08 10:50:36.477730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.912 [2024-10-08 10:50:36.477742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.912 [2024-10-08 10:50:36.477931] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 77.697 ms, result 0 00:19:16.174 00:19:16.174 00:19:16.174 10:50:36 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:18.722 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:18.722 10:50:38 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:18.722 [2024-10-08 10:50:39.058341] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:19:18.723 [2024-10-08 10:50:39.058504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88975 ] 00:19:18.723 [2024-10-08 10:50:39.204048] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
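The "testfile: OK" line above confirms that the data read back out of ftl0 matches the digest recorded earlier, after which restore.sh@79 writes the file back into the bdev at an offset for the next restore cycle. A minimal bash sketch of that verify-and-rewrite pair follows; the $SPDK variable is illustrative, and the 4 KiB block size is the same assumption as in the earlier sketch.

#!/usr/bin/env bash
# Sketch only: verify the read-back, then rewrite it into ftl0 at an offset.
SPDK=/home/vagrant/spdk_repo/spdk   # assumed repo path, as seen in the log

# 1) Check the file against the stored checksum list; prints "testfile: OK"
#    on success, as in the log above.
md5sum -c "$SPDK/test/ftl/testfile.md5"

# 2) Write the file back into the bdev. --if is a regular input file and
#    --ob the output *bdev*; assuming 4 KiB blocks, --seek=131072 starts
#    the write 131072 * 4096 B = 512 MiB into ftl0's address space.
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile" \
    --ob=ftl0 \
    --json="$SPDK/test/ftl/config/ftl.json" \
    --seek=131072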
00:19:18.723 [2024-10-08 10:50:39.223633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.723 [2024-10-08 10:50:39.274007] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.985 [2024-10-08 10:50:39.386538] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:18.985 [2024-10-08 10:50:39.386620] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:18.985 [2024-10-08 10:50:39.548204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.985 [2024-10-08 10:50:39.548433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:18.985 [2024-10-08 10:50:39.548469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:18.985 [2024-10-08 10:50:39.548479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.985 [2024-10-08 10:50:39.548552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.985 [2024-10-08 10:50:39.548564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:18.985 [2024-10-08 10:50:39.548574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:18.985 [2024-10-08 10:50:39.548582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.985 [2024-10-08 10:50:39.548607] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:18.985 [2024-10-08 10:50:39.548899] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:18.985 [2024-10-08 10:50:39.548920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.985 [2024-10-08 10:50:39.548932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:18.985 [2024-10-08 10:50:39.548943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:19:18.985 [2024-10-08 10:50:39.548957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.985 [2024-10-08 10:50:39.550719] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:18.985 [2024-10-08 10:50:39.554689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.985 [2024-10-08 10:50:39.554748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:18.985 [2024-10-08 10:50:39.554760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.973 ms 00:19:18.985 [2024-10-08 10:50:39.554769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.985 [2024-10-08 10:50:39.554869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.985 [2024-10-08 10:50:39.554887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:18.985 [2024-10-08 10:50:39.554898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:18.985 [2024-10-08 10:50:39.554905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.248 [2024-10-08 10:50:39.562939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.248 [2024-10-08 10:50:39.562980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:19.248 [2024-10-08 10:50:39.562992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.991 ms 00:19:19.248 [2024-10-08 10:50:39.563009] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.248 [2024-10-08 10:50:39.563106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.248 [2024-10-08 10:50:39.563116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:19.248 [2024-10-08 10:50:39.563125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:19.248 [2024-10-08 10:50:39.563133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.248 [2024-10-08 10:50:39.563194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.248 [2024-10-08 10:50:39.563206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:19.248 [2024-10-08 10:50:39.563217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:19.248 [2024-10-08 10:50:39.563226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.248 [2024-10-08 10:50:39.563255] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:19.248 [2024-10-08 10:50:39.565326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.248 [2024-10-08 10:50:39.565363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:19.248 [2024-10-08 10:50:39.565392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.076 ms 00:19:19.248 [2024-10-08 10:50:39.565401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.248 [2024-10-08 10:50:39.565435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.248 [2024-10-08 10:50:39.565445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:19.248 [2024-10-08 10:50:39.565454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:19.248 [2024-10-08 10:50:39.565462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.248 [2024-10-08 10:50:39.565491] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:19.248 [2024-10-08 10:50:39.565512] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:19.248 [2024-10-08 10:50:39.565556] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:19.248 [2024-10-08 10:50:39.565573] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:19.248 [2024-10-08 10:50:39.565682] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:19.248 [2024-10-08 10:50:39.565699] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:19.248 [2024-10-08 10:50:39.565710] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:19.248 [2024-10-08 10:50:39.565725] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:19.248 [2024-10-08 10:50:39.565735] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:19.248 [2024-10-08 10:50:39.565744] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:19.248 [2024-10-08 10:50:39.565752] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:19:19.248 [2024-10-08 10:50:39.565760] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:19.248 [2024-10-08 10:50:39.565769] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:19.248 [2024-10-08 10:50:39.565779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.248 [2024-10-08 10:50:39.565786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:19.248 [2024-10-08 10:50:39.565817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:19:19.248 [2024-10-08 10:50:39.565828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.248 [2024-10-08 10:50:39.565912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.248 [2024-10-08 10:50:39.565926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:19.248 [2024-10-08 10:50:39.565938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:19.248 [2024-10-08 10:50:39.565971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.248 [2024-10-08 10:50:39.566076] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:19.248 [2024-10-08 10:50:39.566090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:19.248 [2024-10-08 10:50:39.566101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:19.248 [2024-10-08 10:50:39.566111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.248 [2024-10-08 10:50:39.566120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:19.248 [2024-10-08 10:50:39.566129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:19.248 [2024-10-08 10:50:39.566138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:19.248 [2024-10-08 10:50:39.566149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:19.248 [2024-10-08 10:50:39.566165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:19.248 [2024-10-08 10:50:39.566174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:19.248 [2024-10-08 10:50:39.566191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:19.248 [2024-10-08 10:50:39.566205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:19.248 [2024-10-08 10:50:39.566213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:19.248 [2024-10-08 10:50:39.566221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:19.248 [2024-10-08 10:50:39.566229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:19.248 [2024-10-08 10:50:39.566239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.248 [2024-10-08 10:50:39.566248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:19.248 [2024-10-08 10:50:39.566259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:19.248 [2024-10-08 10:50:39.566267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.248 [2024-10-08 10:50:39.566279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:19.248 [2024-10-08 10:50:39.566289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:19.248 [2024-10-08 10:50:39.566299] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.248 [2024-10-08 10:50:39.566307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:19.248 [2024-10-08 10:50:39.566316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:19.248 [2024-10-08 10:50:39.566324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.248 [2024-10-08 10:50:39.566332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:19.248 [2024-10-08 10:50:39.566341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:19.248 [2024-10-08 10:50:39.566354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.248 [2024-10-08 10:50:39.566362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:19.248 [2024-10-08 10:50:39.566370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:19.248 [2024-10-08 10:50:39.566379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.248 [2024-10-08 10:50:39.566386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:19.248 [2024-10-08 10:50:39.566395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:19.248 [2024-10-08 10:50:39.566402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:19.248 [2024-10-08 10:50:39.566413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:19.248 [2024-10-08 10:50:39.566421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:19.248 [2024-10-08 10:50:39.566429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:19.248 [2024-10-08 10:50:39.566437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:19.248 [2024-10-08 10:50:39.566445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:19.248 [2024-10-08 10:50:39.566453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.248 [2024-10-08 10:50:39.566461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:19.248 [2024-10-08 10:50:39.566469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:19.248 [2024-10-08 10:50:39.566476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.248 [2024-10-08 10:50:39.566486] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:19.249 [2024-10-08 10:50:39.566496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:19.249 [2024-10-08 10:50:39.566508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:19.249 [2024-10-08 10:50:39.566517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.249 [2024-10-08 10:50:39.566527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:19.249 [2024-10-08 10:50:39.566534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:19.249 [2024-10-08 10:50:39.566543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:19.249 [2024-10-08 10:50:39.566551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:19.249 [2024-10-08 10:50:39.566559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:19.249 [2024-10-08 10:50:39.566567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:19:19.249 [2024-10-08 10:50:39.566577] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:19.249 [2024-10-08 10:50:39.566590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:19.249 [2024-10-08 10:50:39.566603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:19.249 [2024-10-08 10:50:39.566612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:19.249 [2024-10-08 10:50:39.566621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:19.249 [2024-10-08 10:50:39.566630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:19.249 [2024-10-08 10:50:39.566642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:19.249 [2024-10-08 10:50:39.566651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:19.249 [2024-10-08 10:50:39.566659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:19.249 [2024-10-08 10:50:39.566668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:19.249 [2024-10-08 10:50:39.566677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:19.249 [2024-10-08 10:50:39.566686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:19.249 [2024-10-08 10:50:39.566694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:19.249 [2024-10-08 10:50:39.566703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:19.249 [2024-10-08 10:50:39.566712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:19.249 [2024-10-08 10:50:39.566722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:19.249 [2024-10-08 10:50:39.566730] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:19.249 [2024-10-08 10:50:39.566742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:19.249 [2024-10-08 10:50:39.566753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:19.249 [2024-10-08 10:50:39.566761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:19.249 [2024-10-08 10:50:39.566771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:19.249 [2024-10-08 10:50:39.566779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:19.249 [2024-10-08 10:50:39.566791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.566825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:19.249 [2024-10-08 10:50:39.566835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:19:19.249 [2024-10-08 10:50:39.566845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.592230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.592313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:19.249 [2024-10-08 10:50:39.592336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.327 ms 00:19:19.249 [2024-10-08 10:50:39.592352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.592494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.592509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:19.249 [2024-10-08 10:50:39.592524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:19.249 [2024-10-08 10:50:39.592536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.604775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.604857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:19.249 [2024-10-08 10:50:39.604870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.138 ms 00:19:19.249 [2024-10-08 10:50:39.604878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.604915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.604924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:19.249 [2024-10-08 10:50:39.604934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:19.249 [2024-10-08 10:50:39.604942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.605507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.605540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:19.249 [2024-10-08 10:50:39.605553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:19:19.249 [2024-10-08 10:50:39.605563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.605714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.605726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:19.249 [2024-10-08 10:50:39.605737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:19:19.249 [2024-10-08 10:50:39.605746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.612782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 
10:50:39.612865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:19.249 [2024-10-08 10:50:39.612876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.010 ms 00:19:19.249 [2024-10-08 10:50:39.612884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.616840] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:19.249 [2024-10-08 10:50:39.616887] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:19.249 [2024-10-08 10:50:39.616901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.616910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:19.249 [2024-10-08 10:50:39.616930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.918 ms 00:19:19.249 [2024-10-08 10:50:39.616941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.633568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.633640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:19.249 [2024-10-08 10:50:39.633652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.561 ms 00:19:19.249 [2024-10-08 10:50:39.633661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.636964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.637012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:19.249 [2024-10-08 10:50:39.637026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.247 ms 00:19:19.249 [2024-10-08 10:50:39.637034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.639931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.640122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:19.249 [2024-10-08 10:50:39.640141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.848 ms 00:19:19.249 [2024-10-08 10:50:39.640149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.640519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.640536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:19.249 [2024-10-08 10:50:39.640547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:19:19.249 [2024-10-08 10:50:39.640556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.668121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.668180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:19.249 [2024-10-08 10:50:39.668193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.543 ms 00:19:19.249 [2024-10-08 10:50:39.668201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.676334] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:19.249 [2024-10-08 10:50:39.679380] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.679579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:19.249 [2024-10-08 10:50:39.679610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.127 ms 00:19:19.249 [2024-10-08 10:50:39.679619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.679695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.679707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:19.249 [2024-10-08 10:50:39.679717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:19.249 [2024-10-08 10:50:39.679729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.679826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.679845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:19.249 [2024-10-08 10:50:39.679856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:19.249 [2024-10-08 10:50:39.679868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.679889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.249 [2024-10-08 10:50:39.679898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:19.249 [2024-10-08 10:50:39.679907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:19.249 [2024-10-08 10:50:39.679918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.249 [2024-10-08 10:50:39.679957] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:19.249 [2024-10-08 10:50:39.679968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.250 [2024-10-08 10:50:39.679976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:19.250 [2024-10-08 10:50:39.679984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:19.250 [2024-10-08 10:50:39.679998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.250 [2024-10-08 10:50:39.685630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.250 [2024-10-08 10:50:39.685677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:19.250 [2024-10-08 10:50:39.685698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.611 ms 00:19:19.250 [2024-10-08 10:50:39.685707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.250 [2024-10-08 10:50:39.685791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.250 [2024-10-08 10:50:39.685825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:19.250 [2024-10-08 10:50:39.685842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:19.250 [2024-10-08 10:50:39.685850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.250 [2024-10-08 10:50:39.686992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 138.287 ms, result 0 00:19:20.194  [2024-10-08T10:50:41.717Z] Copying: 9740/1048576 [kB] (9740 kBps) [2024-10-08T10:50:43.103Z] Copying: 21/1024 [MB] (12 
MBps) [2024-10-08T10:50:44.043Z] Copying: 33/1024 [MB] (11 MBps) [2024-10-08T10:50:44.987Z] Copying: 50/1024 [MB] (16 MBps) [2024-10-08T10:50:45.926Z] Copying: 61232/1048576 [kB] (9864 kBps) [2024-10-08T10:50:46.859Z] Copying: 73/1024 [MB] (13 MBps) [2024-10-08T10:50:47.793Z] Copying: 91/1024 [MB] (17 MBps) [2024-10-08T10:50:48.729Z] Copying: 109/1024 [MB] (17 MBps) [2024-10-08T10:50:50.114Z] Copying: 125/1024 [MB] (16 MBps) [2024-10-08T10:50:51.053Z] Copying: 138/1024 [MB] (12 MBps) [2024-10-08T10:50:51.987Z] Copying: 151/1024 [MB] (13 MBps) [2024-10-08T10:50:52.921Z] Copying: 170/1024 [MB] (18 MBps) [2024-10-08T10:50:53.860Z] Copying: 190/1024 [MB] (20 MBps) [2024-10-08T10:50:54.834Z] Copying: 205080/1048576 [kB] (10040 kBps) [2024-10-08T10:50:55.776Z] Copying: 214988/1048576 [kB] (9908 kBps) [2024-10-08T10:50:56.720Z] Copying: 220/1024 [MB] (10 MBps) [2024-10-08T10:50:58.094Z] Copying: 235528/1048576 [kB] (10024 kBps) [2024-10-08T10:50:59.026Z] Copying: 245/1024 [MB] (15 MBps) [2024-10-08T10:50:59.960Z] Copying: 262/1024 [MB] (16 MBps) [2024-10-08T10:51:00.902Z] Copying: 278/1024 [MB] (16 MBps) [2024-10-08T10:51:01.846Z] Copying: 298/1024 [MB] (20 MBps) [2024-10-08T10:51:02.789Z] Copying: 313/1024 [MB] (15 MBps) [2024-10-08T10:51:03.733Z] Copying: 323/1024 [MB] (10 MBps) [2024-10-08T10:51:05.113Z] Copying: 340880/1048576 [kB] (9328 kBps) [2024-10-08T10:51:06.053Z] Copying: 346/1024 [MB] (13 MBps) [2024-10-08T10:51:06.997Z] Copying: 375/1024 [MB] (29 MBps) [2024-10-08T10:51:07.940Z] Copying: 389/1024 [MB] (13 MBps) [2024-10-08T10:51:08.883Z] Copying: 399/1024 [MB] (10 MBps) [2024-10-08T10:51:09.816Z] Copying: 416/1024 [MB] (17 MBps) [2024-10-08T10:51:10.749Z] Copying: 434/1024 [MB] (17 MBps) [2024-10-08T10:51:12.122Z] Copying: 452/1024 [MB] (18 MBps) [2024-10-08T10:51:13.060Z] Copying: 470/1024 [MB] (18 MBps) [2024-10-08T10:51:13.996Z] Copying: 498/1024 [MB] (27 MBps) [2024-10-08T10:51:14.937Z] Copying: 511/1024 [MB] (13 MBps) [2024-10-08T10:51:15.876Z] Copying: 527/1024 [MB] (15 MBps) [2024-10-08T10:51:16.820Z] Copying: 538/1024 [MB] (11 MBps) [2024-10-08T10:51:17.774Z] Copying: 549/1024 [MB] (11 MBps) [2024-10-08T10:51:18.713Z] Copying: 568/1024 [MB] (18 MBps) [2024-10-08T10:51:20.091Z] Copying: 587/1024 [MB] (19 MBps) [2024-10-08T10:51:21.032Z] Copying: 606/1024 [MB] (19 MBps) [2024-10-08T10:51:21.967Z] Copying: 622/1024 [MB] (15 MBps) [2024-10-08T10:51:22.905Z] Copying: 643/1024 [MB] (20 MBps) [2024-10-08T10:51:23.847Z] Copying: 669/1024 [MB] (26 MBps) [2024-10-08T10:51:24.786Z] Copying: 686/1024 [MB] (16 MBps) [2024-10-08T10:51:25.730Z] Copying: 699/1024 [MB] (12 MBps) [2024-10-08T10:51:27.116Z] Copying: 715/1024 [MB] (16 MBps) [2024-10-08T10:51:28.056Z] Copying: 741800/1048576 [kB] (9188 kBps) [2024-10-08T10:51:28.996Z] Copying: 742/1024 [MB] (18 MBps) [2024-10-08T10:51:29.936Z] Copying: 759/1024 [MB] (16 MBps) [2024-10-08T10:51:30.877Z] Copying: 770/1024 [MB] (11 MBps) [2024-10-08T10:51:31.820Z] Copying: 782/1024 [MB] (11 MBps) [2024-10-08T10:51:32.763Z] Copying: 792/1024 [MB] (10 MBps) [2024-10-08T10:51:33.709Z] Copying: 803/1024 [MB] (10 MBps) [2024-10-08T10:51:35.130Z] Copying: 814/1024 [MB] (11 MBps) [2024-10-08T10:51:35.707Z] Copying: 844160/1048576 [kB] (9812 kBps) [2024-10-08T10:51:37.092Z] Copying: 835/1024 [MB] (10 MBps) [2024-10-08T10:51:38.036Z] Copying: 864944/1048576 [kB] (9588 kBps) [2024-10-08T10:51:38.982Z] Copying: 854/1024 [MB] (10 MBps) [2024-10-08T10:51:39.927Z] Copying: 884856/1048576 [kB] (9408 kBps) [2024-10-08T10:51:40.871Z] Copying: 874/1024 [MB] (10 MBps) 
[2024-10-08T10:51:41.815Z] Copying: 885/1024 [MB] (10 MBps) [2024-10-08T10:51:42.756Z] Copying: 895/1024 [MB] (10 MBps) [2024-10-08T10:51:43.701Z] Copying: 927288/1048576 [kB] (10072 kBps) [2024-10-08T10:51:45.090Z] Copying: 937240/1048576 [kB] (9952 kBps) [2024-10-08T10:51:46.034Z] Copying: 947024/1048576 [kB] (9784 kBps) [2024-10-08T10:51:46.984Z] Copying: 935/1024 [MB] (10 MBps) [2024-10-08T10:51:47.926Z] Copying: 967952/1048576 [kB] (9896 kBps) [2024-10-08T10:51:48.868Z] Copying: 962/1024 [MB] (17 MBps) [2024-10-08T10:51:49.812Z] Copying: 972/1024 [MB] (10 MBps) [2024-10-08T10:51:50.756Z] Copying: 990/1024 [MB] (17 MBps) [2024-10-08T10:51:51.699Z] Copying: 1009/1024 [MB] (18 MBps) [2024-10-08T10:51:51.699Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-10-08 10:51:51.415422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.122 [2024-10-08 10:51:51.415473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:31.122 [2024-10-08 10:51:51.415488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:31.122 [2024-10-08 10:51:51.415500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.122 [2024-10-08 10:51:51.415520] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:31.122 [2024-10-08 10:51:51.415990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.122 [2024-10-08 10:51:51.416015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:31.122 [2024-10-08 10:51:51.416025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:20:31.122 [2024-10-08 10:51:51.416032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.122 [2024-10-08 10:51:51.418580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.122 [2024-10-08 10:51:51.418614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:31.122 [2024-10-08 10:51:51.418623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.530 ms 00:20:31.122 [2024-10-08 10:51:51.418631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.122 [2024-10-08 10:51:51.435012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.122 [2024-10-08 10:51:51.435048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:31.122 [2024-10-08 10:51:51.435059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.357 ms 00:20:31.122 [2024-10-08 10:51:51.435067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.122 [2024-10-08 10:51:51.441190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.122 [2024-10-08 10:51:51.441219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:31.122 [2024-10-08 10:51:51.441230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.099 ms 00:20:31.122 [2024-10-08 10:51:51.441238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.122 [2024-10-08 10:51:51.444008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.122 [2024-10-08 10:51:51.444040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:31.122 [2024-10-08 10:51:51.444049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.728 ms 00:20:31.122 [2024-10-08 10:51:51.444056] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.122 [2024-10-08 10:51:51.448135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.122 [2024-10-08 10:51:51.448168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:31.122 [2024-10-08 10:51:51.448185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.050 ms 00:20:31.122 [2024-10-08 10:51:51.448192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.122 [2024-10-08 10:51:51.449789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.122 [2024-10-08 10:51:51.449827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:31.122 [2024-10-08 10:51:51.449836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.565 ms 00:20:31.122 [2024-10-08 10:51:51.449843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.122 [2024-10-08 10:51:51.452927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.122 [2024-10-08 10:51:51.452957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:31.122 [2024-10-08 10:51:51.452965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms 00:20:31.122 [2024-10-08 10:51:51.452972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.122 [2024-10-08 10:51:51.455419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.122 [2024-10-08 10:51:51.455448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:31.122 [2024-10-08 10:51:51.455457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.420 ms 00:20:31.122 [2024-10-08 10:51:51.455464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.122 [2024-10-08 10:51:51.457357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.122 [2024-10-08 10:51:51.457398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:31.122 [2024-10-08 10:51:51.457407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.865 ms 00:20:31.122 [2024-10-08 10:51:51.457415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.122 [2024-10-08 10:51:51.459471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.122 [2024-10-08 10:51:51.459501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:31.122 [2024-10-08 10:51:51.459509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.005 ms 00:20:31.122 [2024-10-08 10:51:51.459516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.122 [2024-10-08 10:51:51.459544] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:31.122 [2024-10-08 10:51:51.459556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 512 / 261120 wr_cnt: 1 state: open 00:20:31.122 [2024-10-08 10:51:51.459566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:31.122 [2024-10-08 10:51:51.459686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459774] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459970] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.459992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 
10:51:51.460158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:31.123 [2024-10-08 10:51:51.460321] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:31.123 [2024-10-08 10:51:51.460329] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 50a8d519-5012-4166-b574-4fc22bad979e 00:20:31.123 [2024-10-08 10:51:51.460337] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 512 00:20:31.123 [2024-10-08 10:51:51.460344] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1472 00:20:31.123 [2024-10-08 10:51:51.460351] ftl_debug.c: 215:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] user writes: 512 00:20:31.123 [2024-10-08 10:51:51.460359] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 2.8750 00:20:31.123 [2024-10-08 10:51:51.460372] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:31.123 [2024-10-08 10:51:51.460380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:31.123 [2024-10-08 10:51:51.460387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:31.123 [2024-10-08 10:51:51.460393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:31.124 [2024-10-08 10:51:51.460399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:31.124 [2024-10-08 10:51:51.460406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.124 [2024-10-08 10:51:51.460413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:31.124 [2024-10-08 10:51:51.460426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:20:31.124 [2024-10-08 10:51:51.460433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.461928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.124 [2024-10-08 10:51:51.461952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:31.124 [2024-10-08 10:51:51.461962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:20:31.124 [2024-10-08 10:51:51.461969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.462050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.124 [2024-10-08 10:51:51.462067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:31.124 [2024-10-08 10:51:51.462075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:31.124 [2024-10-08 10:51:51.462085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.466510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.124 [2024-10-08 10:51:51.466542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.124 [2024-10-08 10:51:51.466551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.124 [2024-10-08 10:51:51.466559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.466612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.124 [2024-10-08 10:51:51.466620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.124 [2024-10-08 10:51:51.466628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.124 [2024-10-08 10:51:51.466635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.466687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.124 [2024-10-08 10:51:51.466697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.124 [2024-10-08 10:51:51.466705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.124 [2024-10-08 10:51:51.466712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.466729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.124 [2024-10-08 
10:51:51.466737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.124 [2024-10-08 10:51:51.466744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.124 [2024-10-08 10:51:51.466751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.475601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.124 [2024-10-08 10:51:51.475639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.124 [2024-10-08 10:51:51.475649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.124 [2024-10-08 10:51:51.475657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.482703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.124 [2024-10-08 10:51:51.482737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.124 [2024-10-08 10:51:51.482747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.124 [2024-10-08 10:51:51.482754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.482781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.124 [2024-10-08 10:51:51.482789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.124 [2024-10-08 10:51:51.482813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.124 [2024-10-08 10:51:51.482821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.482859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.124 [2024-10-08 10:51:51.482868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.124 [2024-10-08 10:51:51.482880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.124 [2024-10-08 10:51:51.482887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.482946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.124 [2024-10-08 10:51:51.482956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.124 [2024-10-08 10:51:51.482963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.124 [2024-10-08 10:51:51.482971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.482997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.124 [2024-10-08 10:51:51.483006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:31.124 [2024-10-08 10:51:51.483015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.124 [2024-10-08 10:51:51.483023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.483060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.124 [2024-10-08 10:51:51.483069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.124 [2024-10-08 10:51:51.483076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.124 [2024-10-08 10:51:51.483083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.483122] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.124 [2024-10-08 10:51:51.483134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.124 [2024-10-08 10:51:51.483144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.124 [2024-10-08 10:51:51.483151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.124 [2024-10-08 10:51:51.483257] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.808 ms, result 0 00:20:31.385 00:20:31.385 00:20:31.385 10:51:51 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:31.385 [2024-10-08 10:51:51.960115] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:20:31.385 [2024-10-08 10:51:51.960248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89728 ] 00:20:31.646 [2024-10-08 10:51:52.089279] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:31.646 [2024-10-08 10:51:52.109844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.646 [2024-10-08 10:51:52.143264] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.909 [2024-10-08 10:51:52.231640] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:31.909 [2024-10-08 10:51:52.231709] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:31.909 [2024-10-08 10:51:52.390327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.909 [2024-10-08 10:51:52.390383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:31.909 [2024-10-08 10:51:52.390398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:31.909 [2024-10-08 10:51:52.390406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.909 [2024-10-08 10:51:52.390455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.909 [2024-10-08 10:51:52.390466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.909 [2024-10-08 10:51:52.390478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:31.909 [2024-10-08 10:51:52.390485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.909 [2024-10-08 10:51:52.390506] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:31.909 [2024-10-08 10:51:52.391214] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:31.909 [2024-10-08 10:51:52.391341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.909 [2024-10-08 10:51:52.391400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.909 [2024-10-08 10:51:52.391446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:20:31.909 [2024-10-08 10:51:52.391494] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:31.909 [2024-10-08 10:51:52.394003] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:31.909 [2024-10-08 10:51:52.397224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.909 [2024-10-08 10:51:52.397279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:31.909 [2024-10-08 10:51:52.397297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:20:31.909 [2024-10-08 10:51:52.397311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.909 [2024-10-08 10:51:52.397404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.909 [2024-10-08 10:51:52.397426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:31.909 [2024-10-08 10:51:52.397442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:31.909 [2024-10-08 10:51:52.397455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.909 [2024-10-08 10:51:52.403366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.909 [2024-10-08 10:51:52.403412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.909 [2024-10-08 10:51:52.403429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.828 ms 00:20:31.909 [2024-10-08 10:51:52.403448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.909 [2024-10-08 10:51:52.403573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.909 [2024-10-08 10:51:52.403589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.909 [2024-10-08 10:51:52.403603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:31.909 [2024-10-08 10:51:52.403617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.909 [2024-10-08 10:51:52.403686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.909 [2024-10-08 10:51:52.403703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:31.909 [2024-10-08 10:51:52.403718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:31.909 [2024-10-08 10:51:52.403731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.909 [2024-10-08 10:51:52.403775] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:31.909 [2024-10-08 10:51:52.405864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.909 [2024-10-08 10:51:52.405908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.909 [2024-10-08 10:51:52.405925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:20:31.909 [2024-10-08 10:51:52.405938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.909 [2024-10-08 10:51:52.405986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.909 [2024-10-08 10:51:52.406000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:31.909 [2024-10-08 10:51:52.406025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:31.909 [2024-10-08 10:51:52.406035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.909 [2024-10-08 10:51:52.406070] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] FTL layout setup mode 0 00:20:31.909 [2024-10-08 10:51:52.406088] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:31.909 [2024-10-08 10:51:52.406123] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:31.909 [2024-10-08 10:51:52.406137] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:31.909 [2024-10-08 10:51:52.406239] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:31.909 [2024-10-08 10:51:52.406255] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:31.909 [2024-10-08 10:51:52.406266] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:31.909 [2024-10-08 10:51:52.406278] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:31.909 [2024-10-08 10:51:52.406287] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:31.909 [2024-10-08 10:51:52.406296] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:31.909 [2024-10-08 10:51:52.406303] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:31.909 [2024-10-08 10:51:52.406311] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:31.909 [2024-10-08 10:51:52.406318] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:31.909 [2024-10-08 10:51:52.406325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.909 [2024-10-08 10:51:52.406337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:31.909 [2024-10-08 10:51:52.406344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:20:31.909 [2024-10-08 10:51:52.406353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.909 [2024-10-08 10:51:52.406438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.909 [2024-10-08 10:51:52.406448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:31.909 [2024-10-08 10:51:52.406456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:31.909 [2024-10-08 10:51:52.406463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.909 [2024-10-08 10:51:52.406560] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:31.909 [2024-10-08 10:51:52.406570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:31.909 [2024-10-08 10:51:52.406579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.909 [2024-10-08 10:51:52.406591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.909 [2024-10-08 10:51:52.406599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:31.909 [2024-10-08 10:51:52.406606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:31.909 [2024-10-08 10:51:52.406614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:31.909 [2024-10-08 10:51:52.406621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:31.909 [2024-10-08 10:51:52.406634] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:31.909 [2024-10-08 10:51:52.406641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.909 [2024-10-08 10:51:52.406650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:31.909 [2024-10-08 10:51:52.406660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:31.909 [2024-10-08 10:51:52.406667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.909 [2024-10-08 10:51:52.406675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:31.909 [2024-10-08 10:51:52.406682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:31.909 [2024-10-08 10:51:52.406690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.909 [2024-10-08 10:51:52.406697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:31.909 [2024-10-08 10:51:52.406705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:31.909 [2024-10-08 10:51:52.406712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.909 [2024-10-08 10:51:52.406719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:31.909 [2024-10-08 10:51:52.406727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:31.909 [2024-10-08 10:51:52.406734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.910 [2024-10-08 10:51:52.406742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:31.910 [2024-10-08 10:51:52.406749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:31.910 [2024-10-08 10:51:52.406756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.910 [2024-10-08 10:51:52.406764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:31.910 [2024-10-08 10:51:52.406771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:31.910 [2024-10-08 10:51:52.406791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.910 [2024-10-08 10:51:52.406810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:31.910 [2024-10-08 10:51:52.406818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:31.910 [2024-10-08 10:51:52.406826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.910 [2024-10-08 10:51:52.406833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:31.910 [2024-10-08 10:51:52.406841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:31.910 [2024-10-08 10:51:52.406849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.910 [2024-10-08 10:51:52.406856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:31.910 [2024-10-08 10:51:52.406864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:31.910 [2024-10-08 10:51:52.406871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.910 [2024-10-08 10:51:52.406879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:31.910 [2024-10-08 10:51:52.406886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:31.910 [2024-10-08 10:51:52.406893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:20:31.910 [2024-10-08 10:51:52.406901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:31.910 [2024-10-08 10:51:52.406908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:31.910 [2024-10-08 10:51:52.406917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.910 [2024-10-08 10:51:52.406927] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:31.910 [2024-10-08 10:51:52.406938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:31.910 [2024-10-08 10:51:52.406952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.910 [2024-10-08 10:51:52.406959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.910 [2024-10-08 10:51:52.406968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:31.910 [2024-10-08 10:51:52.406976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:31.910 [2024-10-08 10:51:52.406983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:31.910 [2024-10-08 10:51:52.406991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:31.910 [2024-10-08 10:51:52.406999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:31.910 [2024-10-08 10:51:52.407006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:31.910 [2024-10-08 10:51:52.407015] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:31.910 [2024-10-08 10:51:52.407024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.910 [2024-10-08 10:51:52.407032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:31.910 [2024-10-08 10:51:52.407040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:31.910 [2024-10-08 10:51:52.407047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:31.910 [2024-10-08 10:51:52.407053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:31.910 [2024-10-08 10:51:52.407062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:31.910 [2024-10-08 10:51:52.407069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:31.910 [2024-10-08 10:51:52.407076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:31.910 [2024-10-08 10:51:52.407083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:31.910 [2024-10-08 10:51:52.407090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:31.910 [2024-10-08 10:51:52.407097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:31.910 [2024-10-08 
10:51:52.407103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:31.910 [2024-10-08 10:51:52.407111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:31.910 [2024-10-08 10:51:52.407117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:31.910 [2024-10-08 10:51:52.407125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:31.910 [2024-10-08 10:51:52.407132] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:31.910 [2024-10-08 10:51:52.407139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.910 [2024-10-08 10:51:52.407147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:31.910 [2024-10-08 10:51:52.407154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:31.910 [2024-10-08 10:51:52.407162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:31.910 [2024-10-08 10:51:52.407169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:31.910 [2024-10-08 10:51:52.407179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.910 [2024-10-08 10:51:52.407186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:31.910 [2024-10-08 10:51:52.407193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:20:31.910 [2024-10-08 10:51:52.407200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.910 [2024-10-08 10:51:52.424900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.910 [2024-10-08 10:51:52.424942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.910 [2024-10-08 10:51:52.424956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.656 ms 00:20:31.910 [2024-10-08 10:51:52.424970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.910 [2024-10-08 10:51:52.425061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.910 [2024-10-08 10:51:52.425069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:31.910 [2024-10-08 10:51:52.425077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:31.910 [2024-10-08 10:51:52.425084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.910 [2024-10-08 10:51:52.433310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.910 [2024-10-08 10:51:52.433344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.910 [2024-10-08 10:51:52.433355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.172 ms 00:20:31.910 [2024-10-08 10:51:52.433370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.910 [2024-10-08 
10:51:52.433402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.910 [2024-10-08 10:51:52.433411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.910 [2024-10-08 10:51:52.433424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:31.910 [2024-10-08 10:51:52.433436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.910 [2024-10-08 10:51:52.433810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.910 [2024-10-08 10:51:52.433833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.910 [2024-10-08 10:51:52.433843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:20:31.910 [2024-10-08 10:51:52.433856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.910 [2024-10-08 10:51:52.433991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.910 [2024-10-08 10:51:52.434002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.910 [2024-10-08 10:51:52.434015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:20:31.910 [2024-10-08 10:51:52.434024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.910 [2024-10-08 10:51:52.438660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.910 [2024-10-08 10:51:52.438703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.910 [2024-10-08 10:51:52.438713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.613 ms 00:20:31.910 [2024-10-08 10:51:52.438722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.910 [2024-10-08 10:51:52.441716] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:20:31.910 [2024-10-08 10:51:52.441754] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:31.910 [2024-10-08 10:51:52.441765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.910 [2024-10-08 10:51:52.441772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:31.910 [2024-10-08 10:51:52.441780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.931 ms 00:20:31.910 [2024-10-08 10:51:52.441804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.910 [2024-10-08 10:51:52.456386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.910 [2024-10-08 10:51:52.456433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:31.910 [2024-10-08 10:51:52.456447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.541 ms 00:20:31.910 [2024-10-08 10:51:52.456456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.910 [2024-10-08 10:51:52.458659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.910 [2024-10-08 10:51:52.458691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:31.910 [2024-10-08 10:51:52.458700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.164 ms 00:20:31.910 [2024-10-08 10:51:52.458707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.910 [2024-10-08 10:51:52.460383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:20:31.910 [2024-10-08 10:51:52.460412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:31.911 [2024-10-08 10:51:52.460420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms 00:20:31.911 [2024-10-08 10:51:52.460427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.911 [2024-10-08 10:51:52.460738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.911 [2024-10-08 10:51:52.460754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:31.911 [2024-10-08 10:51:52.460763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:20:31.911 [2024-10-08 10:51:52.460770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.911 [2024-10-08 10:51:52.476174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.911 [2024-10-08 10:51:52.476232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:31.911 [2024-10-08 10:51:52.476245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.387 ms 00:20:31.911 [2024-10-08 10:51:52.476253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.172 [2024-10-08 10:51:52.483669] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:32.172 [2024-10-08 10:51:52.486071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.172 [2024-10-08 10:51:52.486096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:32.172 [2024-10-08 10:51:52.486111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.775 ms 00:20:32.172 [2024-10-08 10:51:52.486119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.172 [2024-10-08 10:51:52.486171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.172 [2024-10-08 10:51:52.486185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:32.172 [2024-10-08 10:51:52.486194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:32.172 [2024-10-08 10:51:52.486203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.172 [2024-10-08 10:51:52.486759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.172 [2024-10-08 10:51:52.486791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:32.172 [2024-10-08 10:51:52.486819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:20:32.172 [2024-10-08 10:51:52.486831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.172 [2024-10-08 10:51:52.486860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.172 [2024-10-08 10:51:52.486869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:32.172 [2024-10-08 10:51:52.486879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:32.172 [2024-10-08 10:51:52.486890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.172 [2024-10-08 10:51:52.486920] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:32.172 [2024-10-08 10:51:52.486930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.172 [2024-10-08 10:51:52.486937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on 
startup 00:20:32.172 [2024-10-08 10:51:52.486945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:32.172 [2024-10-08 10:51:52.486955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.172 [2024-10-08 10:51:52.491011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.172 [2024-10-08 10:51:52.491047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:32.172 [2024-10-08 10:51:52.491057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.039 ms 00:20:32.172 [2024-10-08 10:51:52.491066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.172 [2024-10-08 10:51:52.491135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.172 [2024-10-08 10:51:52.491149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:32.172 [2024-10-08 10:51:52.491158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:32.172 [2024-10-08 10:51:52.491166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.172 [2024-10-08 10:51:52.492200] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 101.455 ms, result 0 00:20:33.115  [2024-10-08T10:51:55.134Z] Copying: 952/1048576 [kB] (952 kBps) [2024-10-08T10:51:55.706Z] Copying: 19/1024 [MB] (18 MBps) [2024-10-08T10:51:57.092Z] Copying: 33/1024 [MB] (14 MBps) [2024-10-08T10:51:58.035Z] Copying: 48/1024 [MB] (14 MBps) [2024-10-08T10:51:58.977Z] Copying: 67/1024 [MB] (18 MBps) [2024-10-08T10:51:59.935Z] Copying: 88/1024 [MB] (21 MBps) [2024-10-08T10:52:00.888Z] Copying: 103/1024 [MB] (15 MBps) [2024-10-08T10:52:01.832Z] Copying: 117/1024 [MB] (13 MBps) [2024-10-08T10:52:02.775Z] Copying: 128/1024 [MB] (10 MBps) [2024-10-08T10:52:03.719Z] Copying: 138/1024 [MB] (10 MBps) [2024-10-08T10:52:05.105Z] Copying: 149/1024 [MB] (11 MBps) [2024-10-08T10:52:05.678Z] Copying: 164/1024 [MB] (14 MBps) [2024-10-08T10:52:07.063Z] Copying: 181/1024 [MB] (17 MBps) [2024-10-08T10:52:08.029Z] Copying: 206/1024 [MB] (24 MBps) [2024-10-08T10:52:08.971Z] Copying: 220/1024 [MB] (14 MBps) [2024-10-08T10:52:09.914Z] Copying: 235856/1048576 [kB] (9560 kBps) [2024-10-08T10:52:10.856Z] Copying: 241/1024 [MB] (10 MBps) [2024-10-08T10:52:11.800Z] Copying: 252/1024 [MB] (10 MBps) [2024-10-08T10:52:12.745Z] Copying: 262/1024 [MB] (10 MBps) [2024-10-08T10:52:13.688Z] Copying: 278744/1048576 [kB] (10148 kBps) [2024-10-08T10:52:15.077Z] Copying: 288472/1048576 [kB] (9728 kBps) [2024-10-08T10:52:16.037Z] Copying: 291/1024 [MB] (10 MBps) [2024-10-08T10:52:16.979Z] Copying: 309056/1048576 [kB] (10080 kBps) [2024-10-08T10:52:17.920Z] Copying: 319248/1048576 [kB] (10192 kBps) [2024-10-08T10:52:18.865Z] Copying: 322/1024 [MB] (10 MBps) [2024-10-08T10:52:19.808Z] Copying: 332/1024 [MB] (10 MBps) [2024-10-08T10:52:20.748Z] Copying: 350452/1048576 [kB] (10100 kBps) [2024-10-08T10:52:21.689Z] Copying: 352/1024 [MB] (10 MBps) [2024-10-08T10:52:23.073Z] Copying: 362/1024 [MB] (10 MBps) [2024-10-08T10:52:24.017Z] Copying: 372/1024 [MB] (10 MBps) [2024-10-08T10:52:24.961Z] Copying: 391708/1048576 [kB] (10100 kBps) [2024-10-08T10:52:25.903Z] Copying: 401188/1048576 [kB] (9480 kBps) [2024-10-08T10:52:26.843Z] Copying: 402/1024 [MB] (10 MBps) [2024-10-08T10:52:27.781Z] Copying: 421964/1048576 [kB] (9972 kBps) [2024-10-08T10:52:28.721Z] Copying: 422/1024 [MB] (10 MBps) [2024-10-08T10:52:30.106Z] Copying: 432/1024 [MB] 
(10 MBps) [2024-10-08T10:52:30.706Z] Copying: 443/1024 [MB] (10 MBps) [2024-10-08T10:52:32.094Z] Copying: 453/1024 [MB] (10 MBps) [2024-10-08T10:52:33.038Z] Copying: 474040/1048576 [kB] (9944 kBps) [2024-10-08T10:52:33.982Z] Copying: 484280/1048576 [kB] (10240 kBps) [2024-10-08T10:52:34.965Z] Copying: 494516/1048576 [kB] (10236 kBps) [2024-10-08T10:52:35.910Z] Copying: 493/1024 [MB] (10 MBps) [2024-10-08T10:52:36.853Z] Copying: 503/1024 [MB] (10 MBps) [2024-10-08T10:52:37.797Z] Copying: 515/1024 [MB] (11 MBps) [2024-10-08T10:52:38.740Z] Copying: 527/1024 [MB] (12 MBps) [2024-10-08T10:52:39.684Z] Copying: 539/1024 [MB] (11 MBps) [2024-10-08T10:52:41.069Z] Copying: 550/1024 [MB] (11 MBps) [2024-10-08T10:52:42.013Z] Copying: 561/1024 [MB] (11 MBps) [2024-10-08T10:52:42.957Z] Copying: 573/1024 [MB] (11 MBps) [2024-10-08T10:52:43.899Z] Copying: 585/1024 [MB] (11 MBps) [2024-10-08T10:52:44.842Z] Copying: 596/1024 [MB] (11 MBps) [2024-10-08T10:52:45.787Z] Copying: 607/1024 [MB] (11 MBps) [2024-10-08T10:52:46.729Z] Copying: 618/1024 [MB] (11 MBps) [2024-10-08T10:52:48.116Z] Copying: 629/1024 [MB] (10 MBps) [2024-10-08T10:52:48.689Z] Copying: 640/1024 [MB] (11 MBps) [2024-10-08T10:52:50.074Z] Copying: 651/1024 [MB] (11 MBps) [2024-10-08T10:52:51.014Z] Copying: 664/1024 [MB] (13 MBps) [2024-10-08T10:52:51.985Z] Copying: 675/1024 [MB] (11 MBps) [2024-10-08T10:52:52.928Z] Copying: 687/1024 [MB] (11 MBps) [2024-10-08T10:52:53.869Z] Copying: 698/1024 [MB] (10 MBps) [2024-10-08T10:52:54.811Z] Copying: 708/1024 [MB] (10 MBps) [2024-10-08T10:52:55.754Z] Copying: 720/1024 [MB] (11 MBps) [2024-10-08T10:52:56.692Z] Copying: 731/1024 [MB] (10 MBps) [2024-10-08T10:52:58.071Z] Copying: 757/1024 [MB] (26 MBps) [2024-10-08T10:52:59.014Z] Copying: 779/1024 [MB] (22 MBps) [2024-10-08T10:52:59.968Z] Copying: 802/1024 [MB] (22 MBps) [2024-10-08T10:53:00.912Z] Copying: 819/1024 [MB] (16 MBps) [2024-10-08T10:53:01.853Z] Copying: 832/1024 [MB] (12 MBps) [2024-10-08T10:53:02.801Z] Copying: 845/1024 [MB] (12 MBps) [2024-10-08T10:53:03.743Z] Copying: 860/1024 [MB] (15 MBps) [2024-10-08T10:53:04.680Z] Copying: 873/1024 [MB] (12 MBps) [2024-10-08T10:53:06.061Z] Copying: 897/1024 [MB] (24 MBps) [2024-10-08T10:53:07.002Z] Copying: 926/1024 [MB] (29 MBps) [2024-10-08T10:53:07.979Z] Copying: 950/1024 [MB] (23 MBps) [2024-10-08T10:53:08.944Z] Copying: 971/1024 [MB] (21 MBps) [2024-10-08T10:53:09.886Z] Copying: 986/1024 [MB] (14 MBps) [2024-10-08T10:53:10.828Z] Copying: 1003/1024 [MB] (16 MBps) [2024-10-08T10:53:11.400Z] Copying: 1018/1024 [MB] (15 MBps) [2024-10-08T10:53:11.400Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-10-08 10:53:11.357514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.823 [2024-10-08 10:53:11.357613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:50.823 [2024-10-08 10:53:11.357628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:50.823 [2024-10-08 10:53:11.357641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.823 [2024-10-08 10:53:11.357664] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:50.823 [2024-10-08 10:53:11.358144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.823 [2024-10-08 10:53:11.358162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:50.823 [2024-10-08 10:53:11.358172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.462 ms 00:21:50.823 [2024-10-08 10:53:11.358181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.823 [2024-10-08 10:53:11.358456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.823 [2024-10-08 10:53:11.358479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:50.823 [2024-10-08 10:53:11.358491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:21:50.823 [2024-10-08 10:53:11.358501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.823 [2024-10-08 10:53:11.372069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.823 [2024-10-08 10:53:11.372115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:50.823 [2024-10-08 10:53:11.372162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.543 ms 00:21:50.823 [2024-10-08 10:53:11.372170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.823 [2024-10-08 10:53:11.378324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.823 [2024-10-08 10:53:11.378354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:50.823 [2024-10-08 10:53:11.378365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.127 ms 00:21:50.823 [2024-10-08 10:53:11.378373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.823 [2024-10-08 10:53:11.380578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.823 [2024-10-08 10:53:11.380604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:50.823 [2024-10-08 10:53:11.380613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:21:50.823 [2024-10-08 10:53:11.380620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:50.823 [2024-10-08 10:53:11.385039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:50.823 [2024-10-08 10:53:11.385067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:50.823 [2024-10-08 10:53:11.385076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.388 ms 00:21:50.823 [2024-10-08 10:53:11.385083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.396 [2024-10-08 10:53:11.725004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.396 [2024-10-08 10:53:11.725099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:51.396 [2024-10-08 10:53:11.725112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 339.878 ms 00:21:51.396 [2024-10-08 10:53:11.725119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.396 [2024-10-08 10:53:11.727333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.397 [2024-10-08 10:53:11.727366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:51.397 [2024-10-08 10:53:11.727376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.194 ms 00:21:51.397 [2024-10-08 10:53:11.727383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.397 [2024-10-08 10:53:11.728953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.397 [2024-10-08 10:53:11.728982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:51.397 [2024-10-08 10:53:11.728991] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:21:51.397 [2024-10-08 10:53:11.728997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.397 [2024-10-08 10:53:11.730224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.397 [2024-10-08 10:53:11.730254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:51.397 [2024-10-08 10:53:11.730272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.198 ms 00:21:51.397 [2024-10-08 10:53:11.730279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.397 [2024-10-08 10:53:11.731488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.397 [2024-10-08 10:53:11.731517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:51.397 [2024-10-08 10:53:11.731525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.159 ms 00:21:51.397 [2024-10-08 10:53:11.731531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.397 [2024-10-08 10:53:11.731556] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:51.397 [2024-10-08 10:53:11.731570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:21:51.397 [2024-10-08 10:53:11.731580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731692] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 
10:53:11.731885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.731936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:21:51.397 [2024-10-08 10:53:11.732162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:51.397 [2024-10-08 10:53:11.732273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:51.398 [2024-10-08 10:53:11.732422] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:51.398 [2024-10-08 10:53:11.732430] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 50a8d519-5012-4166-b574-4fc22bad979e 00:21:51.398 [2024-10-08 10:53:11.732437] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840 00:21:51.398 [2024-10-08 10:53:11.732444] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 132288 00:21:51.398 [2024-10-08 10:53:11.732452] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 131328 00:21:51.398 [2024-10-08 10:53:11.732466] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0073 00:21:51.398 [2024-10-08 10:53:11.732477] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:51.398 [2024-10-08 10:53:11.732484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:51.398 [2024-10-08 10:53:11.732491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:51.398 [2024-10-08 10:53:11.732498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:51.398 [2024-10-08 10:53:11.732504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:51.398 [2024-10-08 10:53:11.732511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.398 [2024-10-08 10:53:11.732523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:51.398 [2024-10-08 10:53:11.732531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:21:51.398 [2024-10-08 10:53:11.732538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.733979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.398 [2024-10-08 10:53:11.734008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:51.398 [2024-10-08 10:53:11.734017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:21:51.398 [2024-10-08 10:53:11.734024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.734114] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.398 [2024-10-08 10:53:11.734123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:51.398 [2024-10-08 10:53:11.734142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:51.398 [2024-10-08 10:53:11.734149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.738406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.398 [2024-10-08 10:53:11.738436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:51.398 [2024-10-08 10:53:11.738445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.398 [2024-10-08 10:53:11.738452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.738501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.398 [2024-10-08 10:53:11.738510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:51.398 [2024-10-08 10:53:11.738517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.398 [2024-10-08 10:53:11.738531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.738567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.398 [2024-10-08 10:53:11.738577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:51.398 [2024-10-08 10:53:11.738585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.398 [2024-10-08 10:53:11.738591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.738606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.398 [2024-10-08 10:53:11.738619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:51.398 [2024-10-08 10:53:11.738627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.398 [2024-10-08 10:53:11.738634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.747277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.398 [2024-10-08 10:53:11.747324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:51.398 [2024-10-08 10:53:11.747334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.398 [2024-10-08 10:53:11.747342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.754249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.398 [2024-10-08 10:53:11.754290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:51.398 [2024-10-08 10:53:11.754300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.398 [2024-10-08 10:53:11.754308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.754351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.398 [2024-10-08 10:53:11.754365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:51.398 [2024-10-08 10:53:11.754373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.398 [2024-10-08 10:53:11.754380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.754403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.398 [2024-10-08 10:53:11.754415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:51.398 [2024-10-08 10:53:11.754423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.398 [2024-10-08 10:53:11.754430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.754488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.398 [2024-10-08 10:53:11.754497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:51.398 [2024-10-08 10:53:11.754507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.398 [2024-10-08 10:53:11.754514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.754541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.398 [2024-10-08 10:53:11.754554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:51.398 [2024-10-08 10:53:11.754562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.398 [2024-10-08 10:53:11.754569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.754604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.398 [2024-10-08 10:53:11.754612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:51.398 [2024-10-08 10:53:11.754619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.398 [2024-10-08 10:53:11.754628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.754666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.398 [2024-10-08 10:53:11.754674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:51.398 [2024-10-08 10:53:11.754682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.398 [2024-10-08 10:53:11.754689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.398 [2024-10-08 10:53:11.754814] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 397.261 ms, result 0 00:21:51.398 00:21:51.398 00:21:51.398 10:53:11 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:53.946 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:53.946 10:53:14 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:53.946 10:53:14 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:21:53.946 10:53:14 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:53.946 10:53:14 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:53.946 10:53:14 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:53.946 10:53:14 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 87300 00:21:53.946 10:53:14 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 87300 ']' 00:21:53.946 Process with pid 87300 is not found 00:21:53.946 10:53:14 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 87300 00:21:53.946 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (87300) - No such process 00:21:53.946 10:53:14 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 87300 is not found' 00:21:53.946 Remove shared memory files 00:21:53.946 10:53:14 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:21:53.946 10:53:14 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:53.946 10:53:14 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:21:53.946 10:53:14 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:21:53.946 10:53:14 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:21:53.946 10:53:14 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:53.946 10:53:14 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:21:53.946 00:21:53.946 real 5m15.089s 00:21:53.946 user 5m3.083s 00:21:53.946 sys 0m11.610s 00:21:53.946 ************************************ 00:21:53.946 END TEST ftl_restore 00:21:53.946 ************************************ 00:21:53.946 10:53:14 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:53.946 10:53:14 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:53.946 10:53:14 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:53.946 10:53:14 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:53.946 10:53:14 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:53.946 10:53:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:53.946 ************************************ 00:21:53.946 START TEST ftl_dirty_shutdown 00:21:53.946 ************************************ 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:53.946 * Looking for test storage... 
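The teardown traced above is the standard killprocess pattern: kill -0 probes whether the pid still exists without delivering any signal, and an already-dead pid (as with 87300 here) is treated as success. A minimal sketch of that helper, simplified from what test/common/autotest_common.sh traces above:

    killprocess() {
        local pid=$1
        if kill -0 "$pid" 2> /dev/null; then
            # still alive: terminate it and reap the exit status
            kill "$pid" && wait "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }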
00:21:53.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:53.946 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:53.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.947 --rc genhtml_branch_coverage=1 00:21:53.947 --rc genhtml_function_coverage=1 00:21:53.947 --rc genhtml_legend=1 00:21:53.947 --rc geninfo_all_blocks=1 00:21:53.947 --rc geninfo_unexecuted_blocks=1 00:21:53.947 00:21:53.947 ' 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:53.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.947 --rc genhtml_branch_coverage=1 00:21:53.947 --rc genhtml_function_coverage=1 00:21:53.947 --rc genhtml_legend=1 00:21:53.947 --rc geninfo_all_blocks=1 00:21:53.947 --rc geninfo_unexecuted_blocks=1 00:21:53.947 00:21:53.947 ' 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:53.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.947 --rc genhtml_branch_coverage=1 00:21:53.947 --rc genhtml_function_coverage=1 00:21:53.947 --rc genhtml_legend=1 00:21:53.947 --rc geninfo_all_blocks=1 00:21:53.947 --rc geninfo_unexecuted_blocks=1 00:21:53.947 00:21:53.947 ' 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:53.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.947 --rc genhtml_branch_coverage=1 00:21:53.947 --rc genhtml_function_coverage=1 00:21:53.947 --rc genhtml_legend=1 00:21:53.947 --rc geninfo_all_blocks=1 00:21:53.947 --rc geninfo_unexecuted_blocks=1 00:21:53.947 00:21:53.947 ' 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:53.947 10:53:14 
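The lcov gate traced a few lines up ('lt 1.15 2') is a component-wise version comparison: each version string is split on '.' and '-' into an array and the pieces are compared numerically, so 1 < 2 settles the result at the first component. A condensed sketch of that cmp_versions logic, assuming purely numeric components (the real scripts/common.sh handles more cases):

    lt() { # true when version $1 is older than version $2
        local -a ver1 ver2
        IFS='.-' read -ra ver1 <<< "$1"
        IFS='.-' read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1 # equal versions are not less-than
    }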
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=90638 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 90638 00:21:53.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 90638 ']' 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:53.947 10:53:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:54.207 [2024-10-08 10:53:14.526665] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:21:54.207 [2024-10-08 10:53:14.526781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90638 ] 00:21:54.207 [2024-10-08 10:53:14.655066] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
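Here the harness launches spdk_tgt pinned to core 0 (-m 0x1), records svcpid=90638, and parks in waitforlisten until the target's RPC socket answers. A rough sketch of that startup handshake, assuming the default /var/tmp/spdk.sock socket and rpc_get_methods as the probe request:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # poll until the RPC server responds; give up if the target dies first
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods > /dev/null 2>&1; do
        kill -0 "$svcpid" 2> /dev/null || exit 1
        sleep 0.1
    done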
00:21:54.207 [2024-10-08 10:53:14.667752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.207 [2024-10-08 10:53:14.700503] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.801 10:53:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.801 10:53:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:21:54.801 10:53:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:54.801 10:53:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:21:54.801 10:53:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:54.801 10:53:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:21:54.801 10:53:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:54.801 10:53:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:55.371 { 00:21:55.371 "name": "nvme0n1", 00:21:55.371 "aliases": [ 00:21:55.371 "779fae0e-6849-4256-bf38-791129f7fc00" 00:21:55.371 ], 00:21:55.371 "product_name": "NVMe disk", 00:21:55.371 "block_size": 4096, 00:21:55.371 "num_blocks": 1310720, 00:21:55.371 "uuid": "779fae0e-6849-4256-bf38-791129f7fc00", 00:21:55.371 "numa_id": -1, 00:21:55.371 "assigned_rate_limits": { 00:21:55.371 "rw_ios_per_sec": 0, 00:21:55.371 "rw_mbytes_per_sec": 0, 00:21:55.371 "r_mbytes_per_sec": 0, 00:21:55.371 "w_mbytes_per_sec": 0 00:21:55.371 }, 00:21:55.371 "claimed": true, 00:21:55.371 "claim_type": "read_many_write_one", 00:21:55.371 "zoned": false, 00:21:55.371 "supported_io_types": { 00:21:55.371 "read": true, 00:21:55.371 "write": true, 00:21:55.371 "unmap": true, 00:21:55.371 "flush": true, 00:21:55.371 "reset": true, 00:21:55.371 "nvme_admin": true, 00:21:55.371 "nvme_io": true, 00:21:55.371 "nvme_io_md": false, 00:21:55.371 "write_zeroes": true, 00:21:55.371 "zcopy": false, 00:21:55.371 "get_zone_info": false, 00:21:55.371 "zone_management": false, 00:21:55.371 "zone_append": false, 00:21:55.371 "compare": true, 00:21:55.371 "compare_and_write": false, 00:21:55.371 "abort": true, 00:21:55.371 "seek_hole": false, 00:21:55.371 "seek_data": false, 00:21:55.371 "copy": true, 00:21:55.371 "nvme_iov_md": false 00:21:55.371 }, 00:21:55.371 "driver_specific": { 00:21:55.371 "nvme": [ 00:21:55.371 { 00:21:55.371 "pci_address": "0000:00:11.0", 00:21:55.371 "trid": { 00:21:55.371 "trtype": "PCIe", 00:21:55.371 "traddr": "0000:00:11.0" 00:21:55.371 }, 00:21:55.371 "ctrlr_data": { 
00:21:55.371 "cntlid": 0, 00:21:55.371 "vendor_id": "0x1b36", 00:21:55.371 "model_number": "QEMU NVMe Ctrl", 00:21:55.371 "serial_number": "12341", 00:21:55.371 "firmware_revision": "8.0.0", 00:21:55.371 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:55.371 "oacs": { 00:21:55.371 "security": 0, 00:21:55.371 "format": 1, 00:21:55.371 "firmware": 0, 00:21:55.371 "ns_manage": 1 00:21:55.371 }, 00:21:55.371 "multi_ctrlr": false, 00:21:55.371 "ana_reporting": false 00:21:55.371 }, 00:21:55.371 "vs": { 00:21:55.371 "nvme_version": "1.4" 00:21:55.371 }, 00:21:55.371 "ns_data": { 00:21:55.371 "id": 1, 00:21:55.371 "can_share": false 00:21:55.371 } 00:21:55.371 } 00:21:55.371 ], 00:21:55.371 "mp_policy": "active_passive" 00:21:55.371 } 00:21:55.371 } 00:21:55.371 ]' 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:55.371 10:53:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:55.631 10:53:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=97825d88-c263-4c91-9609-5952f05ac1a2 00:21:55.631 10:53:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:55.631 10:53:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97825d88-c263-4c91-9609-5952f05ac1a2 00:21:55.891 10:53:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:56.151 10:53:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=51f41746-3a64-4a6b-9203-14da0e553293 00:21:56.151 10:53:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 51f41746-3a64-4a6b-9203-14da0e553293 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=7d43937c-7229-4a88-94a0-c6781d9a2d6e 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7d43937c-7229-4a88-94a0-c6781d9a2d6e 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=7d43937c-7229-4a88-94a0-c6781d9a2d6e 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 
7d43937c-7229-4a88-94a0-c6781d9a2d6e 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=7d43937c-7229-4a88-94a0-c6781d9a2d6e 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7d43937c-7229-4a88-94a0-c6781d9a2d6e 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:56.410 { 00:21:56.410 "name": "7d43937c-7229-4a88-94a0-c6781d9a2d6e", 00:21:56.410 "aliases": [ 00:21:56.410 "lvs/nvme0n1p0" 00:21:56.410 ], 00:21:56.410 "product_name": "Logical Volume", 00:21:56.410 "block_size": 4096, 00:21:56.410 "num_blocks": 26476544, 00:21:56.410 "uuid": "7d43937c-7229-4a88-94a0-c6781d9a2d6e", 00:21:56.410 "assigned_rate_limits": { 00:21:56.410 "rw_ios_per_sec": 0, 00:21:56.410 "rw_mbytes_per_sec": 0, 00:21:56.410 "r_mbytes_per_sec": 0, 00:21:56.410 "w_mbytes_per_sec": 0 00:21:56.410 }, 00:21:56.410 "claimed": false, 00:21:56.410 "zoned": false, 00:21:56.410 "supported_io_types": { 00:21:56.410 "read": true, 00:21:56.410 "write": true, 00:21:56.410 "unmap": true, 00:21:56.410 "flush": false, 00:21:56.410 "reset": true, 00:21:56.410 "nvme_admin": false, 00:21:56.410 "nvme_io": false, 00:21:56.410 "nvme_io_md": false, 00:21:56.410 "write_zeroes": true, 00:21:56.410 "zcopy": false, 00:21:56.410 "get_zone_info": false, 00:21:56.410 "zone_management": false, 00:21:56.410 "zone_append": false, 00:21:56.410 "compare": false, 00:21:56.410 "compare_and_write": false, 00:21:56.410 "abort": false, 00:21:56.410 "seek_hole": true, 00:21:56.410 "seek_data": true, 00:21:56.410 "copy": false, 00:21:56.410 "nvme_iov_md": false 00:21:56.410 }, 00:21:56.410 "driver_specific": { 00:21:56.410 "lvol": { 00:21:56.410 "lvol_store_uuid": "51f41746-3a64-4a6b-9203-14da0e553293", 00:21:56.410 "base_bdev": "nvme0n1", 00:21:56.410 "thin_provision": true, 00:21:56.410 "num_allocated_clusters": 0, 00:21:56.410 "snapshot": false, 00:21:56.410 "clone": false, 00:21:56.410 "esnap_clone": false 00:21:56.410 } 00:21:56.410 } 00:21:56.410 } 00:21:56.410 ]' 00:21:56.410 10:53:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:56.668 10:53:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:56.668 10:53:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:56.668 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:56.668 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:56.668 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:56.668 10:53:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:21:56.668 10:53:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:21:56.668 10:53:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:56.926 10:53:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:56.926 10:53:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:56.926 10:53:17 
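get_bdev_size reduces the bdev_get_bdevs dump to a size in MiB: block_size times num_blocks, divided by 1024 twice. For the thin lvol just queried, 26476544 blocks * 4096 B = 108,447,924,224 B, hence the bdev_size=103424 above (the earlier nvme0n1 pass gave 1310720 * 4096, i.e. the 5120 MiB). A sketch of the helper, with rpc.py standing in for the full scripts/rpc.py path used throughout:

    get_bdev_size() {
        local bdev_info bs nb
        bdev_info=$(rpc.py bdev_get_bdevs -b "$1")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")  # 4096
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")  # 26476544
        echo $((bs * nb / 1024 / 1024))              # 103424 (MiB)
    }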
ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 7d43937c-7229-4a88-94a0-c6781d9a2d6e 00:21:56.927 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=7d43937c-7229-4a88-94a0-c6781d9a2d6e 00:21:56.927 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:56.927 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:56.927 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:56.927 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7d43937c-7229-4a88-94a0-c6781d9a2d6e 00:21:56.927 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:56.927 { 00:21:56.927 "name": "7d43937c-7229-4a88-94a0-c6781d9a2d6e", 00:21:56.927 "aliases": [ 00:21:56.927 "lvs/nvme0n1p0" 00:21:56.927 ], 00:21:56.927 "product_name": "Logical Volume", 00:21:56.927 "block_size": 4096, 00:21:56.927 "num_blocks": 26476544, 00:21:56.927 "uuid": "7d43937c-7229-4a88-94a0-c6781d9a2d6e", 00:21:56.927 "assigned_rate_limits": { 00:21:56.927 "rw_ios_per_sec": 0, 00:21:56.927 "rw_mbytes_per_sec": 0, 00:21:56.927 "r_mbytes_per_sec": 0, 00:21:56.927 "w_mbytes_per_sec": 0 00:21:56.927 }, 00:21:56.927 "claimed": false, 00:21:56.927 "zoned": false, 00:21:56.927 "supported_io_types": { 00:21:56.927 "read": true, 00:21:56.927 "write": true, 00:21:56.927 "unmap": true, 00:21:56.927 "flush": false, 00:21:56.927 "reset": true, 00:21:56.927 "nvme_admin": false, 00:21:56.927 "nvme_io": false, 00:21:56.927 "nvme_io_md": false, 00:21:56.927 "write_zeroes": true, 00:21:56.927 "zcopy": false, 00:21:56.927 "get_zone_info": false, 00:21:56.927 "zone_management": false, 00:21:56.927 "zone_append": false, 00:21:56.927 "compare": false, 00:21:56.927 "compare_and_write": false, 00:21:56.927 "abort": false, 00:21:56.927 "seek_hole": true, 00:21:56.927 "seek_data": true, 00:21:56.927 "copy": false, 00:21:56.927 "nvme_iov_md": false 00:21:56.927 }, 00:21:56.927 "driver_specific": { 00:21:56.927 "lvol": { 00:21:56.927 "lvol_store_uuid": "51f41746-3a64-4a6b-9203-14da0e553293", 00:21:56.927 "base_bdev": "nvme0n1", 00:21:56.927 "thin_provision": true, 00:21:56.927 "num_allocated_clusters": 0, 00:21:56.927 "snapshot": false, 00:21:56.927 "clone": false, 00:21:56.927 "esnap_clone": false 00:21:56.927 } 00:21:56.927 } 00:21:56.927 } 00:21:56.927 ]' 00:21:56.927 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:57.185 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:57.185 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:57.185 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:57.185 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:57.185 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:57.185 10:53:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:21:57.185 10:53:17 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:57.185 10:53:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:57.185 10:53:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 7d43937c-7229-4a88-94a0-c6781d9a2d6e 00:21:57.185 
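The write-buffer cache was just carved out of nvc0n1: a single split sized to the cache_size computed above, surfaced as nvc0n1p0, which FTL later reports as 'Using nvc0n1p0 as write buffer cache'. The equivalent standalone call:

    rpc.py bdev_split_create nvc0n1 -s 5171 1   # one 5171 MB split -> nvc0n1p0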
10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=7d43937c-7229-4a88-94a0-c6781d9a2d6e 00:21:57.185 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:57.185 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:57.185 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:57.185 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7d43937c-7229-4a88-94a0-c6781d9a2d6e 00:21:57.443 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:57.443 { 00:21:57.443 "name": "7d43937c-7229-4a88-94a0-c6781d9a2d6e", 00:21:57.443 "aliases": [ 00:21:57.443 "lvs/nvme0n1p0" 00:21:57.443 ], 00:21:57.443 "product_name": "Logical Volume", 00:21:57.443 "block_size": 4096, 00:21:57.443 "num_blocks": 26476544, 00:21:57.443 "uuid": "7d43937c-7229-4a88-94a0-c6781d9a2d6e", 00:21:57.443 "assigned_rate_limits": { 00:21:57.443 "rw_ios_per_sec": 0, 00:21:57.443 "rw_mbytes_per_sec": 0, 00:21:57.443 "r_mbytes_per_sec": 0, 00:21:57.443 "w_mbytes_per_sec": 0 00:21:57.443 }, 00:21:57.443 "claimed": false, 00:21:57.443 "zoned": false, 00:21:57.443 "supported_io_types": { 00:21:57.443 "read": true, 00:21:57.443 "write": true, 00:21:57.443 "unmap": true, 00:21:57.443 "flush": false, 00:21:57.443 "reset": true, 00:21:57.443 "nvme_admin": false, 00:21:57.443 "nvme_io": false, 00:21:57.443 "nvme_io_md": false, 00:21:57.443 "write_zeroes": true, 00:21:57.443 "zcopy": false, 00:21:57.443 "get_zone_info": false, 00:21:57.443 "zone_management": false, 00:21:57.443 "zone_append": false, 00:21:57.443 "compare": false, 00:21:57.443 "compare_and_write": false, 00:21:57.443 "abort": false, 00:21:57.443 "seek_hole": true, 00:21:57.443 "seek_data": true, 00:21:57.443 "copy": false, 00:21:57.443 "nvme_iov_md": false 00:21:57.443 }, 00:21:57.443 "driver_specific": { 00:21:57.444 "lvol": { 00:21:57.444 "lvol_store_uuid": "51f41746-3a64-4a6b-9203-14da0e553293", 00:21:57.444 "base_bdev": "nvme0n1", 00:21:57.444 "thin_provision": true, 00:21:57.444 "num_allocated_clusters": 0, 00:21:57.444 "snapshot": false, 00:21:57.444 "clone": false, 00:21:57.444 "esnap_clone": false 00:21:57.444 } 00:21:57.444 } 00:21:57.444 } 00:21:57.444 ]' 00:21:57.444 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:57.444 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:57.444 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:57.444 10:53:17 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:57.444 10:53:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:57.444 10:53:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:57.444 10:53:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:57.444 10:53:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7d43937c-7229-4a88-94a0-c6781d9a2d6e --l2p_dram_limit 10' 00:21:57.444 10:53:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:57.444 10:53:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:21:57.444 10:53:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 
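ftl_construct_args is now fully assembled from the pieces traced above; the RPC issued next ties them together, with -t 240 matching the timeout=240 set at the top of the test:

    rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 7d43937c-7229-4a88-94a0-c6781d9a2d6e \
        --l2p_dram_limit 10 \
        -c nvc0n1p0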
00:21:57.444 10:53:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7d43937c-7229-4a88-94a0-c6781d9a2d6e --l2p_dram_limit 10 -c nvc0n1p0 00:21:57.702 [2024-10-08 10:53:18.191100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.702 [2024-10-08 10:53:18.191150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:57.702 [2024-10-08 10:53:18.191163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:57.702 [2024-10-08 10:53:18.191169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.702 [2024-10-08 10:53:18.191215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.702 [2024-10-08 10:53:18.191223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:57.702 [2024-10-08 10:53:18.191233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:57.702 [2024-10-08 10:53:18.191243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.702 [2024-10-08 10:53:18.191262] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:57.702 [2024-10-08 10:53:18.191568] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:57.702 [2024-10-08 10:53:18.191588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.702 [2024-10-08 10:53:18.191595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:57.702 [2024-10-08 10:53:18.191603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:21:57.702 [2024-10-08 10:53:18.191611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.702 [2024-10-08 10:53:18.191723] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1ad8d3e5-9eb8-4391-a93c-2da83e17e49e 00:21:57.702 [2024-10-08 10:53:18.192646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.702 [2024-10-08 10:53:18.192671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:57.702 [2024-10-08 10:53:18.192679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:57.702 [2024-10-08 10:53:18.192689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.702 [2024-10-08 10:53:18.197274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.702 [2024-10-08 10:53:18.197303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:57.702 [2024-10-08 10:53:18.197311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.552 ms 00:21:57.702 [2024-10-08 10:53:18.197323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.702 [2024-10-08 10:53:18.197392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.702 [2024-10-08 10:53:18.197400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:57.702 [2024-10-08 10:53:18.197409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:57.702 [2024-10-08 10:53:18.197417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.702 [2024-10-08 10:53:18.197455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.702 [2024-10-08 10:53:18.197464] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:57.702 [2024-10-08 10:53:18.197471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:57.702 [2024-10-08 10:53:18.197477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.702 [2024-10-08 10:53:18.197497] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:57.702 [2024-10-08 10:53:18.198733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.702 [2024-10-08 10:53:18.198758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:57.702 [2024-10-08 10:53:18.198768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.241 ms 00:21:57.702 [2024-10-08 10:53:18.198774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.702 [2024-10-08 10:53:18.198810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.702 [2024-10-08 10:53:18.198816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:57.702 [2024-10-08 10:53:18.198826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:57.702 [2024-10-08 10:53:18.198831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.702 [2024-10-08 10:53:18.198845] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:57.703 [2024-10-08 10:53:18.198951] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:57.703 [2024-10-08 10:53:18.198961] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:57.703 [2024-10-08 10:53:18.198969] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:57.703 [2024-10-08 10:53:18.198979] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:57.703 [2024-10-08 10:53:18.198985] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:57.703 [2024-10-08 10:53:18.198998] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:57.703 [2024-10-08 10:53:18.199005] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:57.703 [2024-10-08 10:53:18.199014] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:57.703 [2024-10-08 10:53:18.199021] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:57.703 [2024-10-08 10:53:18.199028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.703 [2024-10-08 10:53:18.199033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:57.703 [2024-10-08 10:53:18.199041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:21:57.703 [2024-10-08 10:53:18.199046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.703 [2024-10-08 10:53:18.199115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.703 [2024-10-08 10:53:18.199121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:57.703 [2024-10-08 10:53:18.199128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:57.703 [2024-10-08 10:53:18.199133] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.703 [2024-10-08 10:53:18.199210] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:57.703 [2024-10-08 10:53:18.199223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:57.703 [2024-10-08 10:53:18.199230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:57.703 [2024-10-08 10:53:18.199236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.703 [2024-10-08 10:53:18.199244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:57.703 [2024-10-08 10:53:18.199249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:57.703 [2024-10-08 10:53:18.199256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:57.703 [2024-10-08 10:53:18.199261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:57.703 [2024-10-08 10:53:18.199267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:57.703 [2024-10-08 10:53:18.199272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:57.703 [2024-10-08 10:53:18.199278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:57.703 [2024-10-08 10:53:18.199283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:57.703 [2024-10-08 10:53:18.199292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:57.703 [2024-10-08 10:53:18.199298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:57.703 [2024-10-08 10:53:18.199305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:57.703 [2024-10-08 10:53:18.199310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.703 [2024-10-08 10:53:18.199317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:57.703 [2024-10-08 10:53:18.199322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:57.703 [2024-10-08 10:53:18.199328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.703 [2024-10-08 10:53:18.199334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:57.703 [2024-10-08 10:53:18.199340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:57.703 [2024-10-08 10:53:18.199345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.703 [2024-10-08 10:53:18.199351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:57.703 [2024-10-08 10:53:18.199356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:57.703 [2024-10-08 10:53:18.199362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.703 [2024-10-08 10:53:18.199367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:57.703 [2024-10-08 10:53:18.199374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:57.703 [2024-10-08 10:53:18.199379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.703 [2024-10-08 10:53:18.199387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:57.703 [2024-10-08 10:53:18.199393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:57.703 [2024-10-08 10:53:18.199400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:57.703 [2024-10-08 
10:53:18.199406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:57.703 [2024-10-08 10:53:18.199413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:57.703 [2024-10-08 10:53:18.199418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:57.703 [2024-10-08 10:53:18.199425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:57.703 [2024-10-08 10:53:18.199431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:57.703 [2024-10-08 10:53:18.199438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:57.703 [2024-10-08 10:53:18.199443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:57.703 [2024-10-08 10:53:18.199450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:57.703 [2024-10-08 10:53:18.199456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.703 [2024-10-08 10:53:18.199464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:57.703 [2024-10-08 10:53:18.199470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:57.703 [2024-10-08 10:53:18.199477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.703 [2024-10-08 10:53:18.199482] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:57.703 [2024-10-08 10:53:18.199492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:57.703 [2024-10-08 10:53:18.199498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:57.703 [2024-10-08 10:53:18.199505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:57.703 [2024-10-08 10:53:18.199512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:57.703 [2024-10-08 10:53:18.199520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:57.703 [2024-10-08 10:53:18.199527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:57.703 [2024-10-08 10:53:18.199534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:57.703 [2024-10-08 10:53:18.199540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:57.703 [2024-10-08 10:53:18.199547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:57.703 [2024-10-08 10:53:18.199556] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:57.703 [2024-10-08 10:53:18.199565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:57.703 [2024-10-08 10:53:18.199572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:57.703 [2024-10-08 10:53:18.199579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:57.703 [2024-10-08 10:53:18.199585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:57.703 [2024-10-08 10:53:18.199593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:57.703 [2024-10-08 10:53:18.199599] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:57.703 [2024-10-08 10:53:18.199607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:57.703 [2024-10-08 10:53:18.199614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:57.703 [2024-10-08 10:53:18.199621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:57.703 [2024-10-08 10:53:18.199627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:57.703 [2024-10-08 10:53:18.199636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:57.703 [2024-10-08 10:53:18.199642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:57.703 [2024-10-08 10:53:18.199649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:57.703 [2024-10-08 10:53:18.199655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:57.703 [2024-10-08 10:53:18.199663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:57.703 [2024-10-08 10:53:18.199669] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:57.703 [2024-10-08 10:53:18.199679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:57.703 [2024-10-08 10:53:18.199685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:57.703 [2024-10-08 10:53:18.199693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:57.703 [2024-10-08 10:53:18.199699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:57.703 [2024-10-08 10:53:18.199706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:57.703 [2024-10-08 10:53:18.199712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:57.703 [2024-10-08 10:53:18.199722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:57.703 [2024-10-08 10:53:18.199728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:21:57.703 [2024-10-08 10:53:18.199735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.703 [2024-10-08 10:53:18.199767] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
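Each region in the superblock dump is given as a hex block offset and size against the 4096 B FTL block. The type 0x2 entry (blk_offs:0x20 blk_sz:0x5000) lines up with the L2P region: 0x5000 blocks is 20480 * 4096 B = 83,886,080 B, exactly the 80.00 MiB shown for 'Region l2p' in the NV cache layout above. Worked out in shell arithmetic:

    echo $(( 0x5000 * 4096 / 1024 / 1024 ))   # -> 80 (MiB)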
00:21:57.703 [2024-10-08 10:53:18.199777] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:00.257 [2024-10-08 10:53:20.331296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.257 [2024-10-08 10:53:20.331352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:00.257 [2024-10-08 10:53:20.331369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2131.519 ms 00:22:00.257 [2024-10-08 10:53:20.331379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.257 [2024-10-08 10:53:20.339541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.257 [2024-10-08 10:53:20.339588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.257 [2024-10-08 10:53:20.339601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.099 ms 00:22:00.257 [2024-10-08 10:53:20.339613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.257 [2024-10-08 10:53:20.339704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.257 [2024-10-08 10:53:20.339714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:00.257 [2024-10-08 10:53:20.339726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:00.257 [2024-10-08 10:53:20.339735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.257 [2024-10-08 10:53:20.347424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.257 [2024-10-08 10:53:20.347465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.257 [2024-10-08 10:53:20.347475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.641 ms 00:22:00.257 [2024-10-08 10:53:20.347490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.257 [2024-10-08 10:53:20.347517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.257 [2024-10-08 10:53:20.347529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.257 [2024-10-08 10:53:20.347537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:00.257 [2024-10-08 10:53:20.347546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.257 [2024-10-08 10:53:20.347879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.257 [2024-10-08 10:53:20.347905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.257 [2024-10-08 10:53:20.347913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:22:00.257 [2024-10-08 10:53:20.347924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.257 [2024-10-08 10:53:20.348029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.257 [2024-10-08 10:53:20.348039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.257 [2024-10-08 10:53:20.348048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:00.257 [2024-10-08 10:53:20.348059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.257 [2024-10-08 10:53:20.366822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.257 [2024-10-08 10:53:20.366920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.258 [2024-10-08 
10:53:20.366952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.712 ms 00:22:00.258 [2024-10-08 10:53:20.366978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.258 [2024-10-08 10:53:20.376571] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:00.258 [2024-10-08 10:53:20.379192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.258 [2024-10-08 10:53:20.379225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:00.258 [2024-10-08 10:53:20.379239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.006 ms 00:22:00.258 [2024-10-08 10:53:20.379247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.258 [2024-10-08 10:53:20.421908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.258 [2024-10-08 10:53:20.421960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:00.258 [2024-10-08 10:53:20.421976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.629 ms 00:22:00.258 [2024-10-08 10:53:20.421987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.258 [2024-10-08 10:53:20.422164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.258 [2024-10-08 10:53:20.422175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:00.258 [2024-10-08 10:53:20.422185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:22:00.258 [2024-10-08 10:53:20.422192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.258 [2024-10-08 10:53:20.425138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.258 [2024-10-08 10:53:20.425173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:00.258 [2024-10-08 10:53:20.425185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.925 ms 00:22:00.258 [2024-10-08 10:53:20.425193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.258 [2024-10-08 10:53:20.427511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.258 [2024-10-08 10:53:20.427545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:00.258 [2024-10-08 10:53:20.427557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.275 ms 00:22:00.258 [2024-10-08 10:53:20.427565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.258 [2024-10-08 10:53:20.427874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.258 [2024-10-08 10:53:20.427891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:00.258 [2024-10-08 10:53:20.427903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:22:00.258 [2024-10-08 10:53:20.427910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.258 [2024-10-08 10:53:20.452147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.258 [2024-10-08 10:53:20.452185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:00.258 [2024-10-08 10:53:20.452198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.212 ms 00:22:00.258 [2024-10-08 10:53:20.452206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.258 [2024-10-08 10:53:20.455898] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.258 [2024-10-08 10:53:20.455934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:00.258 [2024-10-08 10:53:20.455945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.641 ms 00:22:00.258 [2024-10-08 10:53:20.455953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.258 [2024-10-08 10:53:20.459013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.258 [2024-10-08 10:53:20.459044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:00.258 [2024-10-08 10:53:20.459055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.024 ms 00:22:00.258 [2024-10-08 10:53:20.459061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.258 [2024-10-08 10:53:20.461960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.258 [2024-10-08 10:53:20.461993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:00.258 [2024-10-08 10:53:20.462006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms 00:22:00.258 [2024-10-08 10:53:20.462013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.258 [2024-10-08 10:53:20.462050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.258 [2024-10-08 10:53:20.462059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:00.258 [2024-10-08 10:53:20.462069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:00.258 [2024-10-08 10:53:20.462077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.258 [2024-10-08 10:53:20.462141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.258 [2024-10-08 10:53:20.462149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:00.258 [2024-10-08 10:53:20.462159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:00.258 [2024-10-08 10:53:20.462166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.258 [2024-10-08 10:53:20.463033] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2271.531 ms, result 0 00:22:00.258 { 00:22:00.258 "name": "ftl0", 00:22:00.258 "uuid": "1ad8d3e5-9eb8-4391-a93c-2da83e17e49e" 00:22:00.258 } 00:22:00.258 10:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:00.258 10:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:00.258 10:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:00.258 10:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:00.258 10:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:00.516 /dev/nbd0 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:00.516 1+0 records in 00:22:00.516 1+0 records out 00:22:00.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331046 s, 12.4 MB/s 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:22:00.516 10:53:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:00.516 [2024-10-08 10:53:21.050634] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:22:00.516 [2024-10-08 10:53:21.050756] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90769 ] 00:22:00.774 [2024-10-08 10:53:21.177989] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:00.774 [2024-10-08 10:53:21.197482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.774 [2024-10-08 10:53:21.229223] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.707  [2024-10-08T10:53:23.657Z] Copying: 195/1024 [MB] (195 MBps) [2024-10-08T10:53:24.591Z] Copying: 391/1024 [MB] (195 MBps) [2024-10-08T10:53:25.525Z] Copying: 639/1024 [MB] (248 MBps) [2024-10-08T10:53:25.783Z] Copying: 896/1024 [MB] (256 MBps) [2024-10-08T10:53:26.040Z] Copying: 1024/1024 [MB] (average 227 MBps) 00:22:05.463 00:22:05.463 10:53:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:07.378 10:53:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:07.378 [2024-10-08 10:53:27.593594] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:22:07.378 [2024-10-08 10:53:27.593691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90840 ]
00:22:07.378 [2024-10-08 10:53:27.715302] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:22:07.378 [2024-10-08 10:53:27.737535] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:07.378 [2024-10-08 10:53:27.768722] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:22:08.333  [2024-10-08T10:53:29.846Z] Copying: 26/1024 [MB] (26 MBps)
[... 31 intermediate dd progress updates (20-37 MBps per interval) elided ...]
[2024-10-08T10:54:00.392Z] Copying: 1024/1024 [MB] (average 31 MBps)
00:22:39.815
00:22:39.815 10:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0
00:22:40.074 10:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
00:22:40.074 10:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:22:40.074 [2024-10-08 10:54:00.581737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.074 [2024-10-08 10:54:00.581781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:22:40.074 [2024-10-08 10:54:00.581793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:22:40.074 [2024-10-08 10:54:00.581809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0]
status: 0 00:22:40.075 [2024-10-08 10:54:00.581827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:40.075 [2024-10-08 10:54:00.582250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.075 [2024-10-08 10:54:00.582380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:40.075 [2024-10-08 10:54:00.582401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:22:40.075 [2024-10-08 10:54:00.582407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.075 [2024-10-08 10:54:00.584181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.075 [2024-10-08 10:54:00.584214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:40.075 [2024-10-08 10:54:00.584224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.747 ms 00:22:40.075 [2024-10-08 10:54:00.584230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.075 [2024-10-08 10:54:00.596162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.075 [2024-10-08 10:54:00.596191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:40.075 [2024-10-08 10:54:00.596201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.914 ms 00:22:40.075 [2024-10-08 10:54:00.596207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.075 [2024-10-08 10:54:00.601152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.075 [2024-10-08 10:54:00.601260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:40.075 [2024-10-08 10:54:00.601277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.912 ms 00:22:40.075 [2024-10-08 10:54:00.601285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.075 [2024-10-08 10:54:00.602321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.075 [2024-10-08 10:54:00.602350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:40.075 [2024-10-08 10:54:00.602360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:22:40.075 [2024-10-08 10:54:00.602366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.075 [2024-10-08 10:54:00.605865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.075 [2024-10-08 10:54:00.605893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:40.075 [2024-10-08 10:54:00.605907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.468 ms 00:22:40.075 [2024-10-08 10:54:00.605913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.075 [2024-10-08 10:54:00.606011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.075 [2024-10-08 10:54:00.606019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:40.075 [2024-10-08 10:54:00.606028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:40.075 [2024-10-08 10:54:00.606034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.075 [2024-10-08 10:54:00.607833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.075 [2024-10-08 10:54:00.607934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 
00:22:40.075 [2024-10-08 10:54:00.607949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.778 ms
00:22:40.075 [2024-10-08 10:54:00.607955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:40.075 [2024-10-08 10:54:00.609238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.075 [2024-10-08 10:54:00.609265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:22:40.075 [2024-10-08 10:54:00.609274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms
00:22:40.075 [2024-10-08 10:54:00.609279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:40.075 [2024-10-08 10:54:00.610306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.075 [2024-10-08 10:54:00.610334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:22:40.075 [2024-10-08 10:54:00.610342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms
00:22:40.075 [2024-10-08 10:54:00.610347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:40.075 [2024-10-08 10:54:00.611159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.075 [2024-10-08 10:54:00.611250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:22:40.075 [2024-10-08 10:54:00.611265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms
00:22:40.075 [2024-10-08 10:54:00.611270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:40.075 [2024-10-08 10:54:00.611295] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:40.075 [2024-10-08 10:54:00.611307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Bands 2-99: identical entries, each 0 / 261120 wr_cnt: 0 state: free, elided ...]
00:22:40.076 [2024-10-08 10:54:00.612008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:22:40.076 [2024-10-08 10:54:00.612020] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:40.076 [2024-10-08 10:54:00.612027] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ad8d3e5-9eb8-4391-a93c-2da83e17e49e
00:22:40.076 [2024-10-08 10:54:00.612035] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:40.076 [2024-10-08 10:54:00.612043] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:22:40.076 [2024-10-08 10:54:00.612051] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:22:40.076 [2024-10-08 10:54:00.612058] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:22:40.076 [2024-10-08 10:54:00.612064] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:40.076 [2024-10-08 10:54:00.612071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:40.076 [2024-10-08 10:54:00.612076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:40.076 [2024-10-08 10:54:00.612082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:40.076 [2024-10-08 10:54:00.612087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:40.076 [2024-10-08 10:54:00.612094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.076 [2024-10-08 10:54:00.612102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:40.076 [2024-10-08 10:54:00.612111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms
00:22:40.076 [2024-10-08 10:54:00.612117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:40.076 [2024-10-08 10:54:00.613404] mngt/ftl_mngt.c: 427:trace_step:
*NOTICE*: [FTL][ftl0] Action 00:22:40.076 [2024-10-08 10:54:00.613422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:40.076 [2024-10-08 10:54:00.613430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.267 ms 00:22:40.076 [2024-10-08 10:54:00.613436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.076 [2024-10-08 10:54:00.613506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.076 [2024-10-08 10:54:00.613513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:40.076 [2024-10-08 10:54:00.613521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:40.076 [2024-10-08 10:54:00.613528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.076 [2024-10-08 10:54:00.618359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.076 [2024-10-08 10:54:00.618448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:40.076 [2024-10-08 10:54:00.618495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.076 [2024-10-08 10:54:00.618518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.076 [2024-10-08 10:54:00.618607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.076 [2024-10-08 10:54:00.618634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:40.076 [2024-10-08 10:54:00.618678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.076 [2024-10-08 10:54:00.618721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.076 [2024-10-08 10:54:00.618833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.076 [2024-10-08 10:54:00.618882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:40.076 [2024-10-08 10:54:00.618933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.076 [2024-10-08 10:54:00.618951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.077 [2024-10-08 10:54:00.618978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.077 [2024-10-08 10:54:00.618994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:40.077 [2024-10-08 10:54:00.619012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.077 [2024-10-08 10:54:00.619026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.077 [2024-10-08 10:54:00.626881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.077 [2024-10-08 10:54:00.626999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:40.077 [2024-10-08 10:54:00.627081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.077 [2024-10-08 10:54:00.627099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.077 [2024-10-08 10:54:00.633943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.077 [2024-10-08 10:54:00.634059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:40.077 [2024-10-08 10:54:00.634105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.077 [2024-10-08 10:54:00.634123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:40.077 [2024-10-08 10:54:00.634193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.077 [2024-10-08 10:54:00.634282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:40.077 [2024-10-08 10:54:00.634302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.077 [2024-10-08 10:54:00.634318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.077 [2024-10-08 10:54:00.634360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.077 [2024-10-08 10:54:00.634451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:40.077 [2024-10-08 10:54:00.634471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.077 [2024-10-08 10:54:00.634486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.077 [2024-10-08 10:54:00.634552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.077 [2024-10-08 10:54:00.634649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:40.077 [2024-10-08 10:54:00.634670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.077 [2024-10-08 10:54:00.634685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.077 [2024-10-08 10:54:00.634726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.077 [2024-10-08 10:54:00.634826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:40.077 [2024-10-08 10:54:00.634847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.077 [2024-10-08 10:54:00.634862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.077 [2024-10-08 10:54:00.634906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.077 [2024-10-08 10:54:00.634990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:40.077 [2024-10-08 10:54:00.635011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.077 [2024-10-08 10:54:00.635026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.077 [2024-10-08 10:54:00.635070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.077 [2024-10-08 10:54:00.635178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:40.077 [2024-10-08 10:54:00.635199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.077 [2024-10-08 10:54:00.635214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.077 [2024-10-08 10:54:00.635333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.566 ms, result 0 00:22:40.077 true 00:22:40.336 10:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 90638 00:22:40.336 10:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid90638 00:22:40.336 10:54:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:40.336 [2024-10-08 10:54:00.702846] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:22:40.336 [2024-10-08 10:54:00.703071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91192 ] 00:22:40.336 [2024-10-08 10:54:00.826346] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:40.336 [2024-10-08 10:54:00.844278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.336 [2024-10-08 10:54:00.873648] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.712  [2024-10-08T10:54:03.224Z] Copying: 257/1024 [MB] (257 MBps) [2024-10-08T10:54:04.158Z] Copying: 518/1024 [MB] (260 MBps) [2024-10-08T10:54:05.093Z] Copying: 774/1024 [MB] (255 MBps) [2024-10-08T10:54:05.093Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:22:44.516 00:22:44.516 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 90638 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:44.516 10:54:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:44.776 [2024-10-08 10:54:05.123858] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:22:44.776 [2024-10-08 10:54:05.123978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91236 ] 00:22:44.776 [2024-10-08 10:54:05.259920] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:44.776 [2024-10-08 10:54:05.280841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.776 [2024-10-08 10:54:05.314700] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.036 [2024-10-08 10:54:05.403368] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:45.036 [2024-10-08 10:54:05.403431] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:45.036 [2024-10-08 10:54:05.466601] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:45.036 [2024-10-08 10:54:05.467048] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:45.036 [2024-10-08 10:54:05.467312] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:45.297 [2024-10-08 10:54:05.779878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.297 [2024-10-08 10:54:05.779933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:45.297 [2024-10-08 10:54:05.779950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:45.297 [2024-10-08 10:54:05.779959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.297 [2024-10-08 10:54:05.780010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.297 [2024-10-08 10:54:05.780023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:45.297 [2024-10-08 10:54:05.780031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:45.297 [2024-10-08 10:54:05.780041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.297 [2024-10-08 10:54:05.780061] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:45.297 [2024-10-08 10:54:05.780299] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:45.297 [2024-10-08 10:54:05.780313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.297 [2024-10-08 10:54:05.780321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:45.297 [2024-10-08 10:54:05.780329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:22:45.297 [2024-10-08 10:54:05.780337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.297 [2024-10-08 10:54:05.781388] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:45.297 [2024-10-08 10:54:05.783723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.297 [2024-10-08 10:54:05.783762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:45.297 [2024-10-08 10:54:05.783774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.337 ms 00:22:45.297 [2024-10-08 10:54:05.783787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.297 [2024-10-08 10:54:05.783852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.297 [2024-10-08 10:54:05.783863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:45.297 [2024-10-08 10:54:05.783871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:45.297 [2024-10-08 10:54:05.783883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.297 [2024-10-08 10:54:05.788326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:45.297 [2024-10-08 10:54:05.788454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:45.297 [2024-10-08 10:54:05.788469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.386 ms 00:22:45.297 [2024-10-08 10:54:05.788482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.297 [2024-10-08 10:54:05.788562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.297 [2024-10-08 10:54:05.788572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:45.297 [2024-10-08 10:54:05.788584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:45.297 [2024-10-08 10:54:05.788591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.297 [2024-10-08 10:54:05.788633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.297 [2024-10-08 10:54:05.788642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:45.297 [2024-10-08 10:54:05.788655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:45.297 [2024-10-08 10:54:05.788661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.297 [2024-10-08 10:54:05.788683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:45.297 [2024-10-08 10:54:05.789974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.297 [2024-10-08 10:54:05.789997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:45.297 [2024-10-08 10:54:05.790006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:22:45.297 [2024-10-08 10:54:05.790014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.297 [2024-10-08 10:54:05.790048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.297 [2024-10-08 10:54:05.790057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:45.297 [2024-10-08 10:54:05.790065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:45.297 [2024-10-08 10:54:05.790073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.297 [2024-10-08 10:54:05.790096] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:45.297 [2024-10-08 10:54:05.790114] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:45.297 [2024-10-08 10:54:05.790152] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:45.297 [2024-10-08 10:54:05.790178] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:45.297 [2024-10-08 10:54:05.790286] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:45.297 [2024-10-08 10:54:05.790301] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:45.297 [2024-10-08 10:54:05.790312] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:45.297 [2024-10-08 10:54:05.790323] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:45.297 [2024-10-08 10:54:05.790331] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:45.297 [2024-10-08 10:54:05.790339] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:45.297 [2024-10-08 10:54:05.790346] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:45.297 [2024-10-08 10:54:05.790353] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:45.297 [2024-10-08 10:54:05.790363] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:45.297 [2024-10-08 10:54:05.790370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.297 [2024-10-08 10:54:05.790379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:45.297 [2024-10-08 10:54:05.790388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:22:45.297 [2024-10-08 10:54:05.790396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.297 [2024-10-08 10:54:05.790478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.297 [2024-10-08 10:54:05.790489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:45.297 [2024-10-08 10:54:05.790500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:45.297 [2024-10-08 10:54:05.790507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.297 [2024-10-08 10:54:05.790602] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:45.297 [2024-10-08 10:54:05.790611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:45.297 [2024-10-08 10:54:05.790625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:45.297 [2024-10-08 10:54:05.790634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.297 [2024-10-08 10:54:05.790647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:45.297 [2024-10-08 10:54:05.790655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:45.297 [2024-10-08 10:54:05.790667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:45.298 [2024-10-08 10:54:05.790675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:45.298 [2024-10-08 10:54:05.790683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:45.298 [2024-10-08 10:54:05.790691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:45.298 [2024-10-08 10:54:05.790699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:45.298 [2024-10-08 10:54:05.790706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:45.298 [2024-10-08 10:54:05.790713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:45.298 [2024-10-08 10:54:05.790721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:45.298 [2024-10-08 10:54:05.790728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:45.298 [2024-10-08 10:54:05.790735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.298 [2024-10-08 10:54:05.790744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:45.298 [2024-10-08 10:54:05.790751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:45.298 [2024-10-08 10:54:05.790758] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.298 [2024-10-08 10:54:05.790766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:45.298 [2024-10-08 10:54:05.790774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:45.298 [2024-10-08 10:54:05.790781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.298 [2024-10-08 10:54:05.790805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:45.298 [2024-10-08 10:54:05.790813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:45.298 [2024-10-08 10:54:05.790821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.298 [2024-10-08 10:54:05.790828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:45.298 [2024-10-08 10:54:05.790836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:45.298 [2024-10-08 10:54:05.790844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.298 [2024-10-08 10:54:05.790852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:45.298 [2024-10-08 10:54:05.790859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:45.298 [2024-10-08 10:54:05.790866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.298 [2024-10-08 10:54:05.790874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:45.298 [2024-10-08 10:54:05.790882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:45.298 [2024-10-08 10:54:05.790890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:45.298 [2024-10-08 10:54:05.790898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:45.298 [2024-10-08 10:54:05.790905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:45.298 [2024-10-08 10:54:05.790912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:45.298 [2024-10-08 10:54:05.790920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:45.298 [2024-10-08 10:54:05.790930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:45.298 [2024-10-08 10:54:05.790937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.298 [2024-10-08 10:54:05.790945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:45.298 [2024-10-08 10:54:05.790952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:45.298 [2024-10-08 10:54:05.790960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.298 [2024-10-08 10:54:05.790967] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:45.298 [2024-10-08 10:54:05.790976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:45.298 [2024-10-08 10:54:05.790985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:45.298 [2024-10-08 10:54:05.790994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.298 [2024-10-08 10:54:05.791002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:45.298 [2024-10-08 10:54:05.791010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:45.298 [2024-10-08 10:54:05.791017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:45.298 
[2024-10-08 10:54:05.791024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:45.298 [2024-10-08 10:54:05.791030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:45.298 [2024-10-08 10:54:05.791037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:45.298 [2024-10-08 10:54:05.791045] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:45.298 [2024-10-08 10:54:05.791056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:45.298 [2024-10-08 10:54:05.791064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:45.298 [2024-10-08 10:54:05.791071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:45.298 [2024-10-08 10:54:05.791078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:45.298 [2024-10-08 10:54:05.791085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:45.298 [2024-10-08 10:54:05.791092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:45.298 [2024-10-08 10:54:05.791099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:45.298 [2024-10-08 10:54:05.791106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:45.298 [2024-10-08 10:54:05.791113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:45.298 [2024-10-08 10:54:05.791121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:45.298 [2024-10-08 10:54:05.791128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:45.298 [2024-10-08 10:54:05.791135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:45.298 [2024-10-08 10:54:05.791142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:45.298 [2024-10-08 10:54:05.791149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:45.298 [2024-10-08 10:54:05.791156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:45.298 [2024-10-08 10:54:05.791163] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:45.298 [2024-10-08 10:54:05.791173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:45.298 [2024-10-08 10:54:05.791183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:45.298 [2024-10-08 10:54:05.791190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:45.298 [2024-10-08 10:54:05.791198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:45.298 [2024-10-08 10:54:05.791204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:45.298 [2024-10-08 10:54:05.791212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.298 [2024-10-08 10:54:05.791223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:45.298 [2024-10-08 10:54:05.791230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:22:45.298 [2024-10-08 10:54:05.791238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.298 [2024-10-08 10:54:05.807892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.298 [2024-10-08 10:54:05.808084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:45.298 [2024-10-08 10:54:05.808159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.612 ms 00:22:45.298 [2024-10-08 10:54:05.808190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.298 [2024-10-08 10:54:05.808327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.298 [2024-10-08 10:54:05.808357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:45.298 [2024-10-08 10:54:05.808426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:22:45.298 [2024-10-08 10:54:05.808480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.298 [2024-10-08 10:54:05.816574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.298 [2024-10-08 10:54:05.816705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:45.298 [2024-10-08 10:54:05.816768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.994 ms 00:22:45.298 [2024-10-08 10:54:05.816806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.298 [2024-10-08 10:54:05.817190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.298 [2024-10-08 10:54:05.817281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:45.298 [2024-10-08 10:54:05.817333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:45.298 [2024-10-08 10:54:05.817357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.298 [2024-10-08 10:54:05.817722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.298 [2024-10-08 10:54:05.817834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:45.298 [2024-10-08 10:54:05.817884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:22:45.298 [2024-10-08 10:54:05.817907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.298 [2024-10-08 10:54:05.818047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.298 [2024-10-08 10:54:05.818077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:45.298 [2024-10-08 10:54:05.818128] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:22:45.298 [2024-10-08 10:54:05.818150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.298 [2024-10-08 10:54:05.822598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.299 [2024-10-08 10:54:05.822693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:45.299 [2024-10-08 10:54:05.822743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.415 ms 00:22:45.299 [2024-10-08 10:54:05.822770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.299 [2024-10-08 10:54:05.824973] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:45.299 [2024-10-08 10:54:05.825085] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:45.299 [2024-10-08 10:54:05.825144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.299 [2024-10-08 10:54:05.825164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:45.299 [2024-10-08 10:54:05.825189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.234 ms 00:22:45.299 [2024-10-08 10:54:05.825214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.299 [2024-10-08 10:54:05.839673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.299 [2024-10-08 10:54:05.839781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:45.299 [2024-10-08 10:54:05.839850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.408 ms 00:22:45.299 [2024-10-08 10:54:05.839873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.299 [2024-10-08 10:54:05.841557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.299 [2024-10-08 10:54:05.841673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:45.299 [2024-10-08 10:54:05.841769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.600 ms 00:22:45.299 [2024-10-08 10:54:05.841805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.299 [2024-10-08 10:54:05.843231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.299 [2024-10-08 10:54:05.843331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:45.299 [2024-10-08 10:54:05.843381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:22:45.299 [2024-10-08 10:54:05.843403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.299 [2024-10-08 10:54:05.843733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.299 [2024-10-08 10:54:05.843843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:45.299 [2024-10-08 10:54:05.843893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:22:45.299 [2024-10-08 10:54:05.843915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.299 [2024-10-08 10:54:05.858456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.299 [2024-10-08 10:54:05.858622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:45.299 [2024-10-08 10:54:05.858676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
14.510 ms 00:22:45.299 [2024-10-08 10:54:05.858700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.299 [2024-10-08 10:54:05.866099] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:45.299 [2024-10-08 10:54:05.868739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.299 [2024-10-08 10:54:05.868854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:45.299 [2024-10-08 10:54:05.868918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.989 ms 00:22:45.299 [2024-10-08 10:54:05.868942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.299 [2024-10-08 10:54:05.869026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.299 [2024-10-08 10:54:05.869362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:45.299 [2024-10-08 10:54:05.869449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:45.299 [2024-10-08 10:54:05.869484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.299 [2024-10-08 10:54:05.869640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.299 [2024-10-08 10:54:05.869709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:45.299 [2024-10-08 10:54:05.869774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:45.299 [2024-10-08 10:54:05.869808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.299 [2024-10-08 10:54:05.869850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.299 [2024-10-08 10:54:05.869964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:45.299 [2024-10-08 10:54:05.869993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:45.299 [2024-10-08 10:54:05.870014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.299 [2024-10-08 10:54:05.870061] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:45.593 [2024-10-08 10:54:05.870174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.593 [2024-10-08 10:54:05.870185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:45.594 [2024-10-08 10:54:05.870199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:22:45.594 [2024-10-08 10:54:05.870207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.594 [2024-10-08 10:54:05.873245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.594 [2024-10-08 10:54:05.873354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:45.594 [2024-10-08 10:54:05.873368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.013 ms 00:22:45.594 [2024-10-08 10:54:05.873377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.594 [2024-10-08 10:54:05.873446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.594 [2024-10-08 10:54:05.873459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:45.594 [2024-10-08 10:54:05.873467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:45.594 [2024-10-08 10:54:05.873474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.594 
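Editor's note: every FTL management step above is traced as a pair of entries from mngt/ftl_mngt.c — "name: <step>" (line 428) followed by "duration: <ms> ms" (line 430) — and the finish_msg entry just below reports the total for the whole 'FTL startup' process (94.151 ms), which should roughly equal the sum of the per-step durations. A minimal sketch, assuming this console output has been saved to a file (the file name is hypothetical), that tabulates where that time goes:

    import re
    from collections import defaultdict

    # Matches the trace_step pairs emitted by mngt/ftl_mngt.c:
    #   ... name: <step name> <hh:mm:ss...>  then  ... duration: <ms> ms
    PAIR = re.compile(
        r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (?P<name>.+?) \d{2}:\d{2}:\d{2}"
        r".*?trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: (?P<ms>[\d.]+) ms",
        re.DOTALL,
    )

    def step_durations(log_text):
        """Sum the reported duration of every traced FTL management step."""
        totals = defaultdict(float)
        for m in PAIR.finditer(log_text):
            totals[m.group("name")] += float(m.group("ms"))
        return totals

    # "ftl_dirty_shutdown.log" is a hypothetical capture of this console output.
    text = open("ftl_dirty_shutdown.log").read()
    for name, ms in sorted(step_durations(text).items(), key=lambda kv: -kv[1]):
        print(f"{ms:8.3f} ms  {name}")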
[2024-10-08 10:54:05.874424] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 94.151 ms, result 0 00:22:46.526  [2024-10-08T10:54:08.039Z] Copying: 40/1024 [MB] (40 MBps) [2024-10-08T10:54:08.977Z] Copying: 68/1024 [MB] (28 MBps) [2024-10-08T10:54:09.943Z] Copying: 90/1024 [MB] (21 MBps) [2024-10-08T10:54:11.327Z] Copying: 111/1024 [MB] (21 MBps) [2024-10-08T10:54:11.895Z] Copying: 134/1024 [MB] (22 MBps) [2024-10-08T10:54:13.280Z] Copying: 157/1024 [MB] (23 MBps) [2024-10-08T10:54:14.222Z] Copying: 170/1024 [MB] (12 MBps) [2024-10-08T10:54:15.204Z] Copying: 181/1024 [MB] (10 MBps) [2024-10-08T10:54:16.148Z] Copying: 192/1024 [MB] (10 MBps) [2024-10-08T10:54:17.093Z] Copying: 203/1024 [MB] (10 MBps) [2024-10-08T10:54:18.040Z] Copying: 213/1024 [MB] (10 MBps) [2024-10-08T10:54:18.981Z] Copying: 223/1024 [MB] (10 MBps) [2024-10-08T10:54:20.048Z] Copying: 234/1024 [MB] (10 MBps) [2024-10-08T10:54:20.990Z] Copying: 274/1024 [MB] (40 MBps) [2024-10-08T10:54:21.934Z] Copying: 292/1024 [MB] (18 MBps) [2024-10-08T10:54:23.315Z] Copying: 309/1024 [MB] (17 MBps) [2024-10-08T10:54:24.258Z] Copying: 334/1024 [MB] (25 MBps) [2024-10-08T10:54:25.201Z] Copying: 367/1024 [MB] (32 MBps) [2024-10-08T10:54:26.143Z] Copying: 389/1024 [MB] (22 MBps) [2024-10-08T10:54:27.086Z] Copying: 404/1024 [MB] (15 MBps) [2024-10-08T10:54:28.028Z] Copying: 420/1024 [MB] (15 MBps) [2024-10-08T10:54:28.974Z] Copying: 438/1024 [MB] (17 MBps) [2024-10-08T10:54:29.914Z] Copying: 462/1024 [MB] (23 MBps) [2024-10-08T10:54:31.297Z] Copying: 484/1024 [MB] (22 MBps) [2024-10-08T10:54:32.239Z] Copying: 508/1024 [MB] (23 MBps) [2024-10-08T10:54:33.209Z] Copying: 533/1024 [MB] (25 MBps) [2024-10-08T10:54:34.149Z] Copying: 555/1024 [MB] (21 MBps) [2024-10-08T10:54:35.094Z] Copying: 581/1024 [MB] (26 MBps) [2024-10-08T10:54:36.035Z] Copying: 599/1024 [MB] (17 MBps) [2024-10-08T10:54:36.968Z] Copying: 613/1024 [MB] (14 MBps) [2024-10-08T10:54:37.912Z] Copying: 650/1024 [MB] (36 MBps) [2024-10-08T10:54:38.921Z] Copying: 679/1024 [MB] (28 MBps) [2024-10-08T10:54:40.309Z] Copying: 698/1024 [MB] (18 MBps) [2024-10-08T10:54:41.254Z] Copying: 719/1024 [MB] (21 MBps) [2024-10-08T10:54:42.199Z] Copying: 738/1024 [MB] (18 MBps) [2024-10-08T10:54:43.145Z] Copying: 758/1024 [MB] (20 MBps) [2024-10-08T10:54:44.090Z] Copying: 770/1024 [MB] (12 MBps) [2024-10-08T10:54:45.035Z] Copying: 781/1024 [MB] (10 MBps) [2024-10-08T10:54:45.978Z] Copying: 810164/1048576 [kB] (10220 kBps) [2024-10-08T10:54:46.917Z] Copying: 801/1024 [MB] (10 MBps) [2024-10-08T10:54:48.304Z] Copying: 834/1024 [MB] (33 MBps) [2024-10-08T10:54:49.273Z] Copying: 854/1024 [MB] (19 MBps) [2024-10-08T10:54:50.216Z] Copying: 873/1024 [MB] (19 MBps) [2024-10-08T10:54:51.157Z] Copying: 894/1024 [MB] (21 MBps) [2024-10-08T10:54:52.101Z] Copying: 913/1024 [MB] (18 MBps) [2024-10-08T10:54:53.046Z] Copying: 934/1024 [MB] (21 MBps) [2024-10-08T10:54:53.985Z] Copying: 953/1024 [MB] (18 MBps) [2024-10-08T10:54:54.929Z] Copying: 974/1024 [MB] (21 MBps) [2024-10-08T10:54:56.316Z] Copying: 997/1024 [MB] (22 MBps) [2024-10-08T10:54:57.258Z] Copying: 1031800/1048576 [kB] (10180 kBps) [2024-10-08T10:54:57.829Z] Copying: 1023/1024 [MB] (15 MBps) [2024-10-08T10:54:57.829Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-10-08 10:54:57.732311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.252 [2024-10-08 10:54:57.732744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:37.252 
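Editor's note: the bracketed "Copying: N/1024 [MB]" entries above are cumulative progress samples from the copy this test drives through ftl0, and the closing "average 19 MBps" is simply total data over elapsed wall time: roughly 1024 MB between about 10:54:06 and 10:54:57 is ~52 s, i.e. just under 20 MB/s, so the reported average is consistent. A minimal sketch that recomputes the average from those samples (log file name is hypothetical; the occasional "[kB]" samples are deliberately not matched):

    import re
    from datetime import datetime

    # Matches cumulative progress samples such as:
    #   [2024-10-08T10:54:08.039Z] Copying: 40/1024 [MB] (40 MBps)
    PROGRESS = re.compile(
        r"\[(?P<ts>\d{4}-\d{2}-\d{2}T[\d:.]+)Z\] Copying: (?P<mb>\d+)/\d+ \[MB\]"
    )

    def average_mbps(log_text):
        """Average MB/s between the first and last [MB] progress samples."""
        samples = [(datetime.fromisoformat(m.group("ts")), int(m.group("mb")))
                   for m in PROGRESS.finditer(log_text)]
        (t0, mb0), (t1, mb1) = samples[0], samples[-1]
        return (mb1 - mb0) / (t1 - t0).total_seconds()

    # Hypothetical capture of this console output; prints roughly 19-20.
    print(average_mbps(open("ftl_dirty_shutdown.log").read()))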
[2024-10-08 10:54:57.732770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:37.252 [2024-10-08 10:54:57.732780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.252 [2024-10-08 10:54:57.736116] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:37.252 [2024-10-08 10:54:57.737288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.252 [2024-10-08 10:54:57.737324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:37.252 [2024-10-08 10:54:57.737335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:23:37.252 [2024-10-08 10:54:57.737343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.252 [2024-10-08 10:54:57.749631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.252 [2024-10-08 10:54:57.749668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:37.252 [2024-10-08 10:54:57.749687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.511 ms 00:23:37.252 [2024-10-08 10:54:57.749695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.252 [2024-10-08 10:54:57.772512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.252 [2024-10-08 10:54:57.772545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:37.252 [2024-10-08 10:54:57.772565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.801 ms 00:23:37.252 [2024-10-08 10:54:57.772573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.252 [2024-10-08 10:54:57.778763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.252 [2024-10-08 10:54:57.778805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:37.252 [2024-10-08 10:54:57.778815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.159 ms 00:23:37.252 [2024-10-08 10:54:57.778823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.252 [2024-10-08 10:54:57.781126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.252 [2024-10-08 10:54:57.781157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:37.252 [2024-10-08 10:54:57.781166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.253 ms 00:23:37.252 [2024-10-08 10:54:57.781173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.252 [2024-10-08 10:54:57.784414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.252 [2024-10-08 10:54:57.784451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:37.252 [2024-10-08 10:54:57.784460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.211 ms 00:23:37.252 [2024-10-08 10:54:57.784467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.515 [2024-10-08 10:54:57.913201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.515 [2024-10-08 10:54:57.913248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:37.515 [2024-10-08 10:54:57.913259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 128.703 ms 00:23:37.515 [2024-10-08 10:54:57.913266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.515 [2024-10-08 
10:54:57.915220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.515 [2024-10-08 10:54:57.915249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:37.515 [2024-10-08 10:54:57.915258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.939 ms 00:23:37.515 [2024-10-08 10:54:57.915265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.515 [2024-10-08 10:54:57.916628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.515 [2024-10-08 10:54:57.916667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:37.515 [2024-10-08 10:54:57.916675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:23:37.515 [2024-10-08 10:54:57.916681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.515 [2024-10-08 10:54:57.917965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.515 [2024-10-08 10:54:57.917994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:37.515 [2024-10-08 10:54:57.918002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:23:37.515 [2024-10-08 10:54:57.918008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.515 [2024-10-08 10:54:57.919068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.515 [2024-10-08 10:54:57.919098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:37.515 [2024-10-08 10:54:57.919107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:23:37.515 [2024-10-08 10:54:57.919113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.515 [2024-10-08 10:54:57.919139] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:37.515 [2024-10-08 10:54:57.919152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104960 / 261120 wr_cnt: 1 state: open 00:23:37.515 [2024-10-08 10:54:57.919166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919243] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:37.515 [2024-10-08 10:54:57.919345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919423] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 
10:54:57.919604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:23:37.516 [2024-10-08 10:54:57.919789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:37.516 [2024-10-08 10:54:57.919914] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:37.516 [2024-10-08 10:54:57.919922] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ad8d3e5-9eb8-4391-a93c-2da83e17e49e 00:23:37.516 [2024-10-08 10:54:57.919933] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104960 00:23:37.516 [2024-10-08 10:54:57.919940] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105920 00:23:37.516 [2024-10-08 10:54:57.919947] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104960 00:23:37.516 [2024-10-08 10:54:57.919955] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:23:37.516 [2024-10-08 10:54:57.919964] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:37.516 [2024-10-08 10:54:57.919972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:37.516 [2024-10-08 10:54:57.919980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:37.516 [2024-10-08 10:54:57.919986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:37.516 [2024-10-08 10:54:57.919992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:37.516 [2024-10-08 10:54:57.919999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.516 [2024-10-08 10:54:57.920007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:37.516 [2024-10-08 10:54:57.920015] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:23:37.516 [2024-10-08 10:54:57.920024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.921394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.517 [2024-10-08 10:54:57.921426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:37.517 [2024-10-08 10:54:57.921435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.343 ms 00:23:37.517 [2024-10-08 10:54:57.921443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.921518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.517 [2024-10-08 10:54:57.921530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:37.517 [2024-10-08 10:54:57.921538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:37.517 [2024-10-08 10:54:57.921549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.925905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.517 [2024-10-08 10:54:57.925931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:37.517 [2024-10-08 10:54:57.925940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.517 [2024-10-08 10:54:57.925951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.926000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.517 [2024-10-08 10:54:57.926011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:37.517 [2024-10-08 10:54:57.926021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.517 [2024-10-08 10:54:57.926028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.926062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.517 [2024-10-08 10:54:57.926071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:37.517 [2024-10-08 10:54:57.926079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.517 [2024-10-08 10:54:57.926090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.926108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.517 [2024-10-08 10:54:57.926116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:37.517 [2024-10-08 10:54:57.926125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.517 [2024-10-08 10:54:57.926132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.934488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.517 [2024-10-08 10:54:57.934526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:37.517 [2024-10-08 10:54:57.934536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.517 [2024-10-08 10:54:57.934543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.941304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.517 [2024-10-08 10:54:57.941342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:23:37.517 [2024-10-08 10:54:57.941356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.517 [2024-10-08 10:54:57.941364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.941412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.517 [2024-10-08 10:54:57.941420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:37.517 [2024-10-08 10:54:57.941428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.517 [2024-10-08 10:54:57.941436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.941460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.517 [2024-10-08 10:54:57.941468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:37.517 [2024-10-08 10:54:57.941476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.517 [2024-10-08 10:54:57.941485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.941551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.517 [2024-10-08 10:54:57.941561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:37.517 [2024-10-08 10:54:57.941569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.517 [2024-10-08 10:54:57.941576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.941601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.517 [2024-10-08 10:54:57.941609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:37.517 [2024-10-08 10:54:57.941617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.517 [2024-10-08 10:54:57.941623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.941670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.517 [2024-10-08 10:54:57.941679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:37.517 [2024-10-08 10:54:57.941687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.517 [2024-10-08 10:54:57.941694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.941735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.517 [2024-10-08 10:54:57.941744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:37.517 [2024-10-08 10:54:57.941752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.517 [2024-10-08 10:54:57.941761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.517 [2024-10-08 10:54:57.941946] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 212.123 ms, result 0 00:23:38.088 00:23:38.088 00:23:38.088 10:54:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:23:40.633 10:55:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:40.633 [2024-10-08 10:55:00.835775] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:23:40.633 [2024-10-08 10:55:00.835899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91808 ] 00:23:40.633 [2024-10-08 10:55:00.963754] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:40.633 [2024-10-08 10:55:00.986330] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.633 [2024-10-08 10:55:01.019482] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.633 [2024-10-08 10:55:01.106544] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.633 [2024-10-08 10:55:01.106614] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.895 [2024-10-08 10:55:01.264334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.895 [2024-10-08 10:55:01.264381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:40.895 [2024-10-08 10:55:01.264400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:40.895 [2024-10-08 10:55:01.264408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-10-08 10:55:01.264456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.895 [2024-10-08 10:55:01.264466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.895 [2024-10-08 10:55:01.264475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:40.895 [2024-10-08 10:55:01.264482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-10-08 10:55:01.264503] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:40.895 [2024-10-08 10:55:01.265189] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:40.895 [2024-10-08 10:55:01.265276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.895 [2024-10-08 10:55:01.265302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.895 [2024-10-08 10:55:01.265328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:23:40.895 [2024-10-08 10:55:01.265357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.895 [2024-10-08 10:55:01.267303] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:40.895 [2024-10-08 10:55:01.271457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.895 [2024-10-08 10:55:01.271534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:40.895 [2024-10-08 10:55:01.271559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.164 ms 00:23:40.895 [2024-10-08 10:55:01.271579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.896 [2024-10-08 10:55:01.271716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.896 [2024-10-08 10:55:01.271743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:23:40.896 [2024-10-08 10:55:01.271765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:40.896 [2024-10-08 10:55:01.271818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.896 [2024-10-08 10:55:01.278648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.896 [2024-10-08 10:55:01.278677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.896 [2024-10-08 10:55:01.278687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.735 ms 00:23:40.896 [2024-10-08 10:55:01.278696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.896 [2024-10-08 10:55:01.278763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.896 [2024-10-08 10:55:01.278772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.896 [2024-10-08 10:55:01.278780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:40.896 [2024-10-08 10:55:01.278787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.896 [2024-10-08 10:55:01.278836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.896 [2024-10-08 10:55:01.278847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:40.896 [2024-10-08 10:55:01.278859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:40.896 [2024-10-08 10:55:01.278867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.896 [2024-10-08 10:55:01.278908] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:40.896 [2024-10-08 10:55:01.280230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.896 [2024-10-08 10:55:01.280255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.896 [2024-10-08 10:55:01.280263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.331 ms 00:23:40.896 [2024-10-08 10:55:01.280270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.896 [2024-10-08 10:55:01.280298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.896 [2024-10-08 10:55:01.280306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:40.896 [2024-10-08 10:55:01.280313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:40.896 [2024-10-08 10:55:01.280320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.896 [2024-10-08 10:55:01.280345] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:40.896 [2024-10-08 10:55:01.280365] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:40.896 [2024-10-08 10:55:01.280398] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:40.896 [2024-10-08 10:55:01.280412] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:40.896 [2024-10-08 10:55:01.280513] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:40.896 [2024-10-08 10:55:01.280527] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 
0x48 bytes 00:23:40.896 [2024-10-08 10:55:01.280537] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:40.896 [2024-10-08 10:55:01.280555] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:40.896 [2024-10-08 10:55:01.280564] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:40.896 [2024-10-08 10:55:01.280575] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:40.896 [2024-10-08 10:55:01.280585] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:40.896 [2024-10-08 10:55:01.280592] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:40.896 [2024-10-08 10:55:01.280603] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:40.896 [2024-10-08 10:55:01.280610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.896 [2024-10-08 10:55:01.280617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:40.896 [2024-10-08 10:55:01.280629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:23:40.896 [2024-10-08 10:55:01.280636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.896 [2024-10-08 10:55:01.280720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.896 [2024-10-08 10:55:01.280735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:40.896 [2024-10-08 10:55:01.280742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:40.896 [2024-10-08 10:55:01.280753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.896 [2024-10-08 10:55:01.280866] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:40.896 [2024-10-08 10:55:01.280886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:40.896 [2024-10-08 10:55:01.280899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.896 [2024-10-08 10:55:01.280908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.896 [2024-10-08 10:55:01.280916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:40.896 [2024-10-08 10:55:01.280924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:40.896 [2024-10-08 10:55:01.280932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:40.896 [2024-10-08 10:55:01.280940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:40.896 [2024-10-08 10:55:01.280952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:40.896 [2024-10-08 10:55:01.280959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.896 [2024-10-08 10:55:01.280967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:40.896 [2024-10-08 10:55:01.280975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:40.896 [2024-10-08 10:55:01.280983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.896 [2024-10-08 10:55:01.280990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:40.896 [2024-10-08 10:55:01.280999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:40.896 [2024-10-08 10:55:01.281006] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.896 [2024-10-08 10:55:01.281014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:40.896 [2024-10-08 10:55:01.281022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:40.896 [2024-10-08 10:55:01.281031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.896 [2024-10-08 10:55:01.281039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:40.896 [2024-10-08 10:55:01.281047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:40.896 [2024-10-08 10:55:01.281055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.896 [2024-10-08 10:55:01.281062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:40.896 [2024-10-08 10:55:01.281069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:40.896 [2024-10-08 10:55:01.281077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.896 [2024-10-08 10:55:01.281084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:40.896 [2024-10-08 10:55:01.281091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:40.896 [2024-10-08 10:55:01.281099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.896 [2024-10-08 10:55:01.281107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:40.896 [2024-10-08 10:55:01.281114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:40.896 [2024-10-08 10:55:01.281121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.896 [2024-10-08 10:55:01.281129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:40.896 [2024-10-08 10:55:01.281136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:40.896 [2024-10-08 10:55:01.281143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.896 [2024-10-08 10:55:01.281152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:40.896 [2024-10-08 10:55:01.281160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:40.896 [2024-10-08 10:55:01.281167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.896 [2024-10-08 10:55:01.281174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:40.896 [2024-10-08 10:55:01.281182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:40.896 [2024-10-08 10:55:01.281189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.896 [2024-10-08 10:55:01.281196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:40.896 [2024-10-08 10:55:01.281204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:40.896 [2024-10-08 10:55:01.281211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.896 [2024-10-08 10:55:01.281218] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:40.896 [2024-10-08 10:55:01.281227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:40.896 [2024-10-08 10:55:01.281237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.896 [2024-10-08 10:55:01.281249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.896 
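Editor's note: the startup dumps the same layout twice — dump_region (ftl_layout.c) reports each region's offset and size in MiB, while the superblock v5 dump that follows lists the same regions as blk_offs/blk_sz in hex FTL blocks. Assuming a 4 KiB FTL block (SPDK FTL's default block size), the two views agree; for example the l2p region (type 0x2) has blk_offs 0x20 and blk_sz 0x5000, i.e. the 0.12 MiB offset and 80.00 MiB size printed above. A tiny conversion sketch under that assumption:

    FTL_BLOCK_SIZE = 4096  # assumption: 4 KiB FTL block, per SPDK FTL's default

    def blocks_to_mib(nblocks):
        """Convert a block count from the superblock dump into MiB."""
        return nblocks * FTL_BLOCK_SIZE / (1024 * 1024)

    print(blocks_to_mib(0x20))    # 0.125 -> "offset: 0.12 MiB" (l2p)
    print(blocks_to_mib(0x5000))  # 80.0  -> "blocks: 80.00 MiB" (l2p)
    print(blocks_to_mib(0x800))   # 8.0   -> "blocks: 8.00 MiB" (each p2l region)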
[2024-10-08 10:55:01.281258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:40.896 [2024-10-08 10:55:01.281265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:40.896 [2024-10-08 10:55:01.281273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:40.896 [2024-10-08 10:55:01.281283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:40.896 [2024-10-08 10:55:01.281290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:40.896 [2024-10-08 10:55:01.281298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:40.896 [2024-10-08 10:55:01.281306] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:40.896 [2024-10-08 10:55:01.281317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.896 [2024-10-08 10:55:01.281326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:40.896 [2024-10-08 10:55:01.281334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:40.896 [2024-10-08 10:55:01.281342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:40.896 [2024-10-08 10:55:01.281349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:40.896 [2024-10-08 10:55:01.281356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:40.896 [2024-10-08 10:55:01.281363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:40.897 [2024-10-08 10:55:01.281370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:40.897 [2024-10-08 10:55:01.281377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:40.897 [2024-10-08 10:55:01.281383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:40.897 [2024-10-08 10:55:01.281391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:40.897 [2024-10-08 10:55:01.281397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:40.897 [2024-10-08 10:55:01.281406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:40.897 [2024-10-08 10:55:01.281413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:40.897 [2024-10-08 10:55:01.281420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:40.897 [2024-10-08 10:55:01.281427] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB 
metadata layout - base dev: 00:23:40.897 [2024-10-08 10:55:01.281435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.897 [2024-10-08 10:55:01.281443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:40.897 [2024-10-08 10:55:01.281450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:40.897 [2024-10-08 10:55:01.281457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:40.897 [2024-10-08 10:55:01.281464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:40.897 [2024-10-08 10:55:01.281471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.281479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:40.897 [2024-10-08 10:55:01.281487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:23:40.897 [2024-10-08 10:55:01.281494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.297744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.297784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.897 [2024-10-08 10:55:01.297816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.206 ms 00:23:40.897 [2024-10-08 10:55:01.297824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.297909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.297918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:40.897 [2024-10-08 10:55:01.297926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:40.897 [2024-10-08 10:55:01.297933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.306143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.306178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.897 [2024-10-08 10:55:01.306196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.154 ms 00:23:40.897 [2024-10-08 10:55:01.306204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.306233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.306243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.897 [2024-10-08 10:55:01.306252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:40.897 [2024-10-08 10:55:01.306260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.306592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.306616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.897 [2024-10-08 10:55:01.306626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:23:40.897 [2024-10-08 10:55:01.306635] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.306768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.306777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.897 [2024-10-08 10:55:01.306807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:23:40.897 [2024-10-08 10:55:01.306817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.311520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.311557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.897 [2024-10-08 10:55:01.311567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.681 ms 00:23:40.897 [2024-10-08 10:55:01.311576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.314327] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:40.897 [2024-10-08 10:55:01.314361] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:40.897 [2024-10-08 10:55:01.314380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.314387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:40.897 [2024-10-08 10:55:01.314395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.710 ms 00:23:40.897 [2024-10-08 10:55:01.314406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.328962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.328999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:40.897 [2024-10-08 10:55:01.329009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.520 ms 00:23:40.897 [2024-10-08 10:55:01.329016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.330917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.330945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:40.897 [2024-10-08 10:55:01.330958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.865 ms 00:23:40.897 [2024-10-08 10:55:01.330965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.332758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.332790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:40.897 [2024-10-08 10:55:01.332809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.763 ms 00:23:40.897 [2024-10-08 10:55:01.332816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.333121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.333132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:40.897 [2024-10-08 10:55:01.333141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:23:40.897 [2024-10-08 10:55:01.333148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:23:40.897 [2024-10-08 10:55:01.348095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.348149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:40.897 [2024-10-08 10:55:01.348161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.932 ms 00:23:40.897 [2024-10-08 10:55:01.348170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.355539] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:40.897 [2024-10-08 10:55:01.357729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.357757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:40.897 [2024-10-08 10:55:01.357774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.524 ms 00:23:40.897 [2024-10-08 10:55:01.357782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.357840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.357851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:40.897 [2024-10-08 10:55:01.357866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:40.897 [2024-10-08 10:55:01.357874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.359235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.359265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:40.897 [2024-10-08 10:55:01.359275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:23:40.897 [2024-10-08 10:55:01.359285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.359310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.359318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:40.897 [2024-10-08 10:55:01.359326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:40.897 [2024-10-08 10:55:01.359334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.359365] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:40.897 [2024-10-08 10:55:01.359375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.359382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:40.897 [2024-10-08 10:55:01.359390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:40.897 [2024-10-08 10:55:01.359400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.363019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 10:55:01.363051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:40.897 [2024-10-08 10:55:01.363060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.602 ms 00:23:40.897 [2024-10-08 10:55:01.363068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.363134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.897 [2024-10-08 
10:55:01.363143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:40.897 [2024-10-08 10:55:01.363156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:40.897 [2024-10-08 10:55:01.363167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.897 [2024-10-08 10:55:01.364027] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 99.302 ms, result 0 00:23:42.284  [2024-10-08T10:55:03.821Z] Copying: 1104/1048576 [kB] (1104 kBps) [2024-10-08T10:55:04.764Z] Copying: 4724/1048576 [kB] (3620 kBps) [2024-10-08T10:55:05.709Z] Copying: 17/1024 [MB] (12 MBps) [2024-10-08T10:55:06.653Z] Copying: 33/1024 [MB] (16 MBps) [2024-10-08T10:55:07.595Z] Copying: 51/1024 [MB] (17 MBps) [2024-10-08T10:55:08.546Z] Copying: 68/1024 [MB] (16 MBps) [2024-10-08T10:55:09.931Z] Copying: 84/1024 [MB] (16 MBps) [2024-10-08T10:55:10.875Z] Copying: 100/1024 [MB] (15 MBps) [2024-10-08T10:55:11.818Z] Copying: 115/1024 [MB] (14 MBps) [2024-10-08T10:55:12.761Z] Copying: 130/1024 [MB] (14 MBps) [2024-10-08T10:55:13.706Z] Copying: 145/1024 [MB] (15 MBps) [2024-10-08T10:55:14.649Z] Copying: 159/1024 [MB] (14 MBps) [2024-10-08T10:55:15.592Z] Copying: 175/1024 [MB] (15 MBps) [2024-10-08T10:55:16.979Z] Copying: 192/1024 [MB] (17 MBps) [2024-10-08T10:55:17.552Z] Copying: 208/1024 [MB] (16 MBps) [2024-10-08T10:55:18.937Z] Copying: 225/1024 [MB] (16 MBps) [2024-10-08T10:55:19.880Z] Copying: 241/1024 [MB] (16 MBps) [2024-10-08T10:55:20.822Z] Copying: 257/1024 [MB] (16 MBps) [2024-10-08T10:55:21.764Z] Copying: 274/1024 [MB] (16 MBps) [2024-10-08T10:55:22.714Z] Copying: 291/1024 [MB] (16 MBps) [2024-10-08T10:55:23.709Z] Copying: 307/1024 [MB] (16 MBps) [2024-10-08T10:55:24.651Z] Copying: 324/1024 [MB] (16 MBps) [2024-10-08T10:55:25.595Z] Copying: 343/1024 [MB] (18 MBps) [2024-10-08T10:55:26.982Z] Copying: 359/1024 [MB] (16 MBps) [2024-10-08T10:55:27.566Z] Copying: 375/1024 [MB] (16 MBps) [2024-10-08T10:55:28.960Z] Copying: 394/1024 [MB] (18 MBps) [2024-10-08T10:55:29.902Z] Copying: 413/1024 [MB] (18 MBps) [2024-10-08T10:55:30.846Z] Copying: 429/1024 [MB] (15 MBps) [2024-10-08T10:55:31.848Z] Copying: 444/1024 [MB] (15 MBps) [2024-10-08T10:55:32.794Z] Copying: 460/1024 [MB] (15 MBps) [2024-10-08T10:55:33.738Z] Copying: 475/1024 [MB] (15 MBps) [2024-10-08T10:55:34.681Z] Copying: 491/1024 [MB] (15 MBps) [2024-10-08T10:55:35.625Z] Copying: 508/1024 [MB] (17 MBps) [2024-10-08T10:55:36.586Z] Copying: 525/1024 [MB] (17 MBps) [2024-10-08T10:55:37.553Z] Copying: 541/1024 [MB] (15 MBps) [2024-10-08T10:55:38.939Z] Copying: 556/1024 [MB] (14 MBps) [2024-10-08T10:55:39.883Z] Copying: 569/1024 [MB] (13 MBps) [2024-10-08T10:55:40.825Z] Copying: 584/1024 [MB] (14 MBps) [2024-10-08T10:55:41.768Z] Copying: 598/1024 [MB] (14 MBps) [2024-10-08T10:55:42.716Z] Copying: 612/1024 [MB] (14 MBps) [2024-10-08T10:55:43.661Z] Copying: 628/1024 [MB] (15 MBps) [2024-10-08T10:55:44.607Z] Copying: 644/1024 [MB] (16 MBps) [2024-10-08T10:55:45.553Z] Copying: 660/1024 [MB] (16 MBps) [2024-10-08T10:55:46.941Z] Copying: 677/1024 [MB] (16 MBps) [2024-10-08T10:55:47.882Z] Copying: 692/1024 [MB] (15 MBps) [2024-10-08T10:55:48.826Z] Copying: 707/1024 [MB] (15 MBps) [2024-10-08T10:55:49.836Z] Copying: 722/1024 [MB] (15 MBps) [2024-10-08T10:55:50.778Z] Copying: 738/1024 [MB] (15 MBps) [2024-10-08T10:55:51.723Z] Copying: 754/1024 [MB] (15 MBps) [2024-10-08T10:55:52.666Z] Copying: 770/1024 [MB] (16 MBps) [2024-10-08T10:55:53.608Z] 
Copying: 786/1024 [MB] (15 MBps) [2024-10-08T10:55:54.552Z] Copying: 802/1024 [MB] (16 MBps) [2024-10-08T10:55:55.938Z] Copying: 820/1024 [MB] (17 MBps) [2024-10-08T10:55:56.908Z] Copying: 836/1024 [MB] (16 MBps) [2024-10-08T10:55:57.850Z] Copying: 850/1024 [MB] (13 MBps) [2024-10-08T10:55:58.792Z] Copying: 866/1024 [MB] (15 MBps) [2024-10-08T10:55:59.734Z] Copying: 883/1024 [MB] (17 MBps) [2024-10-08T10:56:00.675Z] Copying: 899/1024 [MB] (15 MBps) [2024-10-08T10:56:01.616Z] Copying: 919/1024 [MB] (20 MBps) [2024-10-08T10:56:02.554Z] Copying: 938/1024 [MB] (19 MBps) [2024-10-08T10:56:03.943Z] Copying: 956/1024 [MB] (17 MBps) [2024-10-08T10:56:04.886Z] Copying: 972/1024 [MB] (16 MBps) [2024-10-08T10:56:05.829Z] Copying: 989/1024 [MB] (16 MBps) [2024-10-08T10:56:06.772Z] Copying: 1006/1024 [MB] (16 MBps) [2024-10-08T10:56:06.772Z] Copying: 1022/1024 [MB] (16 MBps) [2024-10-08T10:56:07.037Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-10-08 10:56:06.874971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.460 [2024-10-08 10:56:06.875050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:46.460 [2024-10-08 10:56:06.875073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:46.460 [2024-10-08 10:56:06.875088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.460 [2024-10-08 10:56:06.875126] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:46.460 [2024-10-08 10:56:06.875912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.460 [2024-10-08 10:56:06.875947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:46.460 [2024-10-08 10:56:06.875969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:24:46.460 [2024-10-08 10:56:06.875984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.460 [2024-10-08 10:56:06.876482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.460 [2024-10-08 10:56:06.876518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:46.460 [2024-10-08 10:56:06.876535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:24:46.460 [2024-10-08 10:56:06.876550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.460 [2024-10-08 10:56:06.891429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.460 [2024-10-08 10:56:06.891478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:46.460 [2024-10-08 10:56:06.891489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.852 ms 00:24:46.460 [2024-10-08 10:56:06.891502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.460 [2024-10-08 10:56:06.897682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.460 [2024-10-08 10:56:06.897709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:46.460 [2024-10-08 10:56:06.897740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.152 ms 00:24:46.460 [2024-10-08 10:56:06.897748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.460 [2024-10-08 10:56:06.899907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.460 [2024-10-08 10:56:06.899940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 
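The interleaved "Copying: N/1024 [MB] (R MBps)" records above are spdk_dd's periodic progress output; the first two samples are still in kB before the copy rate picks up. A small parsing sketch that recovers the end-to-end rate from the timestamps (the regex is fitted to this log's format, and overall_mbps is an illustrative name, not an SPDK API):

```python
import re
from datetime import datetime

# Derive overall copy throughput from the spdk_dd progress records above.
# Record shape as seen in this log:
#   [<ISO time>Z] Copying: <done>/<total> [kB|MB] (<rate>)
PROGRESS = re.compile(
    r"\[(?P<ts>\d{4}-\d{2}-\d{2}T[\d:.]+)Z\]\s+"
    r"Copying:\s+(?P<done>\d+)/\d+\s+\[(?P<unit>[kM]B)\]"
)

def overall_mbps(log_text: str) -> float:
    samples = []
    for m in PROGRESS.finditer(log_text):
        mib = int(m["done"]) / (1024 if m["unit"] == "kB" else 1)
        samples.append((datetime.fromisoformat(m["ts"]), mib))
    if len(samples) < 2:
        return float("nan")
    (t0, m0), (t1, m1) = samples[0], samples[-1]
    return (m1 - m0) / (t1 - t0).total_seconds()

# For the run above this lands near the "average 15 MBps" that spdk_dd
# prints once the copy reaches 1024/1024 [MB].
```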
00:24:46.460 [2024-10-08 10:56:06.899949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:24:46.460 [2024-10-08 10:56:06.899956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.460 [2024-10-08 10:56:06.903438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.460 [2024-10-08 10:56:06.903479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:46.460 [2024-10-08 10:56:06.903488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.455 ms 00:24:46.460 [2024-10-08 10:56:06.903498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.460 [2024-10-08 10:56:06.907450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.460 [2024-10-08 10:56:06.907486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:46.460 [2024-10-08 10:56:06.907496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.921 ms 00:24:46.460 [2024-10-08 10:56:06.907504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.460 [2024-10-08 10:56:06.909586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.460 [2024-10-08 10:56:06.909615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:46.460 [2024-10-08 10:56:06.909623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.067 ms 00:24:46.460 [2024-10-08 10:56:06.909630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.460 [2024-10-08 10:56:06.911873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.460 [2024-10-08 10:56:06.911902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:46.460 [2024-10-08 10:56:06.911910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.217 ms 00:24:46.460 [2024-10-08 10:56:06.911917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.460 [2024-10-08 10:56:06.913541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.460 [2024-10-08 10:56:06.913573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:46.460 [2024-10-08 10:56:06.913590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:24:46.460 [2024-10-08 10:56:06.913597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.460 [2024-10-08 10:56:06.915070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.460 [2024-10-08 10:56:06.915098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:46.460 [2024-10-08 10:56:06.915107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms 00:24:46.460 [2024-10-08 10:56:06.915113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.460 [2024-10-08 10:56:06.915139] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:46.460 [2024-10-08 10:56:06.915152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:46.460 [2024-10-08 10:56:06.915162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:24:46.460 [2024-10-08 10:56:06.915170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915177] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 
10:56:06.915360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:46.460 [2024-10-08 10:56:06.915462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:24:46.461 [2024-10-08 10:56:06.915542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:46.461 [2024-10-08 10:56:06.915904] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:46.461 [2024-10-08 10:56:06.915912] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ad8d3e5-9eb8-4391-a93c-2da83e17e49e 00:24:46.461 [2024-10-08 10:56:06.915928] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 
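The bands-validity dump above accounts exactly for the totals in the statistics block around it: Band 1 is closed and fully valid (261120 blocks), Band 2 is open with 1536 valid blocks, bands 3 through 100 are free, and 261120 + 1536 = 262656, precisely the "total valid LBAs" figure. The write-amplification factor reported just below is simply total writes divided by user writes. A worked check, with every number copied from this log:

```python
# Cross-check the band dump against the FTL stats dumped in this log.
band_valid = {1: 261_120, 2: 1_536}        # all other bands report 0 / free
assert sum(band_valid.values()) == 262_656  # "total valid LBAs" above

# Write amplification, from the statistics block that follows:
total_writes, user_writes = 159_680, 157_696
print(f"WAF = {total_writes}/{user_writes} = {total_writes / user_writes:.4f}")
# -> 1.0126, matching the reported value; the 1984-block difference is
#    FTL-internal (metadata/relocation) traffic on top of user writes.
```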
00:24:46.461 [2024-10-08 10:56:06.915938] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 159680 00:24:46.461 [2024-10-08 10:56:06.915945] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 157696 00:24:46.461 [2024-10-08 10:56:06.915952] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0126 00:24:46.461 [2024-10-08 10:56:06.915964] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:46.461 [2024-10-08 10:56:06.915972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:46.461 [2024-10-08 10:56:06.915978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:46.461 [2024-10-08 10:56:06.915985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:46.461 [2024-10-08 10:56:06.915991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:46.461 [2024-10-08 10:56:06.915998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.461 [2024-10-08 10:56:06.916005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:46.461 [2024-10-08 10:56:06.916013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 00:24:46.461 [2024-10-08 10:56:06.916020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.461 [2024-10-08 10:56:06.917432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.461 [2024-10-08 10:56:06.917456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:46.461 [2024-10-08 10:56:06.917466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.398 ms 00:24:46.461 [2024-10-08 10:56:06.917473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.461 [2024-10-08 10:56:06.917545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.461 [2024-10-08 10:56:06.917553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:46.461 [2024-10-08 10:56:06.917567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:46.461 [2024-10-08 10:56:06.917576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.461 [2024-10-08 10:56:06.921921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.461 [2024-10-08 10:56:06.921951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.461 [2024-10-08 10:56:06.921960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.461 [2024-10-08 10:56:06.921968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.461 [2024-10-08 10:56:06.922012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.461 [2024-10-08 10:56:06.922020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.461 [2024-10-08 10:56:06.922027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.461 [2024-10-08 10:56:06.922037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.461 [2024-10-08 10:56:06.922082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.461 [2024-10-08 10:56:06.922091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.461 [2024-10-08 10:56:06.922104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.461 [2024-10-08 10:56:06.922112] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.461 [2024-10-08 10:56:06.922127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.461 [2024-10-08 10:56:06.922134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.461 [2024-10-08 10:56:06.922142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.461 [2024-10-08 10:56:06.922149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.461 [2024-10-08 10:56:06.930486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.461 [2024-10-08 10:56:06.930524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.461 [2024-10-08 10:56:06.930537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.461 [2024-10-08 10:56:06.930545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.461 [2024-10-08 10:56:06.937300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.461 [2024-10-08 10:56:06.937336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.461 [2024-10-08 10:56:06.937346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.462 [2024-10-08 10:56:06.937360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.462 [2024-10-08 10:56:06.937383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.462 [2024-10-08 10:56:06.937395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:46.462 [2024-10-08 10:56:06.937402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.462 [2024-10-08 10:56:06.937410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.462 [2024-10-08 10:56:06.937450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.462 [2024-10-08 10:56:06.937458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:46.462 [2024-10-08 10:56:06.937466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.462 [2024-10-08 10:56:06.937477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.462 [2024-10-08 10:56:06.937635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.462 [2024-10-08 10:56:06.937644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:46.462 [2024-10-08 10:56:06.937652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.462 [2024-10-08 10:56:06.937660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.462 [2024-10-08 10:56:06.937684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.462 [2024-10-08 10:56:06.937693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:46.462 [2024-10-08 10:56:06.937701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.462 [2024-10-08 10:56:06.937708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.462 [2024-10-08 10:56:06.937751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.462 [2024-10-08 10:56:06.937764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:46.462 [2024-10-08 10:56:06.937772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:24:46.462 [2024-10-08 10:56:06.937779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.462 [2024-10-08 10:56:06.937879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.462 [2024-10-08 10:56:06.937895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:46.462 [2024-10-08 10:56:06.937904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.462 [2024-10-08 10:56:06.937911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.462 [2024-10-08 10:56:06.938027] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.044 ms, result 0 00:24:46.723 00:24:46.723 00:24:46.723 10:56:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:49.271 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:49.271 10:56:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:49.271 [2024-10-08 10:56:09.330339] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:24:49.271 [2024-10-08 10:56:09.330458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92505 ] 00:24:49.271 [2024-10-08 10:56:09.458689] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
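Step @95 of dirty_shutdown.sh reads the second half of the FTL device back out: --count=262144 --skip=262144 means 262144 blocks copied after skipping the first 262144, and the blocks are evidently 4 KiB here, since 262144 * 4 KiB = 1 GiB matches the 1024 MB progress counter. A hedged Python equivalent of that invocation, with flags and paths copied verbatim from the command line above:

```python
import subprocess

# Re-issue the read-back step logged above: copy 262144 blocks out of the
# ftl0 bdev into testfile2, skipping the first 262144 blocks. testfile2 is
# a second output file alongside the testfile checked with "md5sum -c" above.
SPDK = "/home/vagrant/spdk_repo/spdk"           # repo path as in this log
cmd = [
    f"{SPDK}/build/bin/spdk_dd",
    "--ib=ftl0",                                # input bdev
    f"--of={SPDK}/test/ftl/testfile2",          # output file
    "--count=262144",                           # blocks to copy (1 GiB @ 4 KiB)
    "--skip=262144",                            # input blocks to skip
    f"--json={SPDK}/test/ftl/config/ftl.json",  # bdev config for the FTL stack
]
subprocess.run(cmd, check=True)
```

As the records that follow show, this second run brings the FTL device up again from the superblock persisted at shutdown (SHM: clean 0, shm_clean 0), replaying the same startup pipeline of trace steps before the copy begins.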
00:24:49.271 [2024-10-08 10:56:09.481345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.271 [2024-10-08 10:56:09.515214] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.271 [2024-10-08 10:56:09.603083] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:49.271 [2024-10-08 10:56:09.603150] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:49.271 [2024-10-08 10:56:09.761395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.271 [2024-10-08 10:56:09.761463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:49.271 [2024-10-08 10:56:09.761480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:49.271 [2024-10-08 10:56:09.761493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.271 [2024-10-08 10:56:09.761547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.271 [2024-10-08 10:56:09.761557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.271 [2024-10-08 10:56:09.761565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:49.271 [2024-10-08 10:56:09.761572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.271 [2024-10-08 10:56:09.761595] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:49.271 [2024-10-08 10:56:09.761978] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:49.271 [2024-10-08 10:56:09.762007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.271 [2024-10-08 10:56:09.762015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.271 [2024-10-08 10:56:09.762024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:24:49.271 [2024-10-08 10:56:09.762035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.271 [2024-10-08 10:56:09.763140] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:49.271 [2024-10-08 10:56:09.765710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.271 [2024-10-08 10:56:09.765754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:49.272 [2024-10-08 10:56:09.765765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.573 ms 00:24:49.272 [2024-10-08 10:56:09.765772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.272 [2024-10-08 10:56:09.765844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.272 [2024-10-08 10:56:09.765855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:49.272 [2024-10-08 10:56:09.765863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:49.272 [2024-10-08 10:56:09.765870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.272 [2024-10-08 10:56:09.770699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.272 [2024-10-08 10:56:09.770732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.272 [2024-10-08 10:56:09.770742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.770 ms 00:24:49.272 [2024-10-08 10:56:09.770752] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.272 [2024-10-08 10:56:09.770842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.272 [2024-10-08 10:56:09.770851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.272 [2024-10-08 10:56:09.770865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:49.272 [2024-10-08 10:56:09.770873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.272 [2024-10-08 10:56:09.770911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.272 [2024-10-08 10:56:09.770920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:49.272 [2024-10-08 10:56:09.770929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:49.272 [2024-10-08 10:56:09.770936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.272 [2024-10-08 10:56:09.770962] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:49.272 [2024-10-08 10:56:09.772282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.272 [2024-10-08 10:56:09.772308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.272 [2024-10-08 10:56:09.772322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.326 ms 00:24:49.272 [2024-10-08 10:56:09.772330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.272 [2024-10-08 10:56:09.772362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.272 [2024-10-08 10:56:09.772370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:49.272 [2024-10-08 10:56:09.772377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:49.272 [2024-10-08 10:56:09.772385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.272 [2024-10-08 10:56:09.772412] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:49.272 [2024-10-08 10:56:09.772430] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:49.272 [2024-10-08 10:56:09.772471] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:49.272 [2024-10-08 10:56:09.772490] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:49.272 [2024-10-08 10:56:09.772593] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:49.272 [2024-10-08 10:56:09.772603] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:49.272 [2024-10-08 10:56:09.772613] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:49.272 [2024-10-08 10:56:09.772629] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:49.272 [2024-10-08 10:56:09.772638] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:49.272 [2024-10-08 10:56:09.772646] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:49.272 [2024-10-08 10:56:09.772653] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:24:49.272 [2024-10-08 10:56:09.772660] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:49.272 [2024-10-08 10:56:09.772667] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:49.272 [2024-10-08 10:56:09.772675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.272 [2024-10-08 10:56:09.772685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:49.272 [2024-10-08 10:56:09.772693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:24:49.272 [2024-10-08 10:56:09.772701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.272 [2024-10-08 10:56:09.772783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.272 [2024-10-08 10:56:09.772819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:49.272 [2024-10-08 10:56:09.772828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:49.272 [2024-10-08 10:56:09.772834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.272 [2024-10-08 10:56:09.772931] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:49.272 [2024-10-08 10:56:09.772945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:49.272 [2024-10-08 10:56:09.772955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.272 [2024-10-08 10:56:09.772970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.272 [2024-10-08 10:56:09.772979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:49.272 [2024-10-08 10:56:09.772986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:49.272 [2024-10-08 10:56:09.772995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:49.272 [2024-10-08 10:56:09.773003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:49.272 [2024-10-08 10:56:09.773016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:49.272 [2024-10-08 10:56:09.773023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.272 [2024-10-08 10:56:09.773033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:49.272 [2024-10-08 10:56:09.773042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:49.272 [2024-10-08 10:56:09.773050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:49.272 [2024-10-08 10:56:09.773058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:49.272 [2024-10-08 10:56:09.773066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:49.272 [2024-10-08 10:56:09.773073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.272 [2024-10-08 10:56:09.773080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:49.272 [2024-10-08 10:56:09.773088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:49.272 [2024-10-08 10:56:09.773095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.272 [2024-10-08 10:56:09.773103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:49.272 [2024-10-08 10:56:09.773110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:49.272 [2024-10-08 10:56:09.773118] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.272 [2024-10-08 10:56:09.773126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:49.272 [2024-10-08 10:56:09.773133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:49.272 [2024-10-08 10:56:09.773140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.272 [2024-10-08 10:56:09.773147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:49.272 [2024-10-08 10:56:09.773160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:49.272 [2024-10-08 10:56:09.773167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.272 [2024-10-08 10:56:09.773174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:49.272 [2024-10-08 10:56:09.773182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:49.272 [2024-10-08 10:56:09.773189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:49.272 [2024-10-08 10:56:09.773197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:49.272 [2024-10-08 10:56:09.773204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:49.272 [2024-10-08 10:56:09.773211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.272 [2024-10-08 10:56:09.773219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:49.272 [2024-10-08 10:56:09.773226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:49.272 [2024-10-08 10:56:09.773233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:49.272 [2024-10-08 10:56:09.773241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:49.272 [2024-10-08 10:56:09.773248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:49.272 [2024-10-08 10:56:09.773256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.272 [2024-10-08 10:56:09.773263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:49.272 [2024-10-08 10:56:09.773272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:49.272 [2024-10-08 10:56:09.773281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.272 [2024-10-08 10:56:09.773290] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:49.272 [2024-10-08 10:56:09.773298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:49.272 [2024-10-08 10:56:09.773308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:49.272 [2024-10-08 10:56:09.773316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:49.272 [2024-10-08 10:56:09.773325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:49.272 [2024-10-08 10:56:09.773332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:49.272 [2024-10-08 10:56:09.773339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:49.272 [2024-10-08 10:56:09.773354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:49.272 [2024-10-08 10:56:09.773361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:49.273 [2024-10-08 10:56:09.773369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:24:49.273 [2024-10-08 10:56:09.773378] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:49.273 [2024-10-08 10:56:09.773388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.273 [2024-10-08 10:56:09.773398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:49.273 [2024-10-08 10:56:09.773406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:49.273 [2024-10-08 10:56:09.773413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:49.273 [2024-10-08 10:56:09.773422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:49.273 [2024-10-08 10:56:09.773429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:49.273 [2024-10-08 10:56:09.773437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:49.273 [2024-10-08 10:56:09.773444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:49.273 [2024-10-08 10:56:09.773451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:49.273 [2024-10-08 10:56:09.773457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:49.273 [2024-10-08 10:56:09.773465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:49.273 [2024-10-08 10:56:09.773471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:49.273 [2024-10-08 10:56:09.773479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:49.273 [2024-10-08 10:56:09.773485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:49.273 [2024-10-08 10:56:09.773493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:49.273 [2024-10-08 10:56:09.773500] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:49.273 [2024-10-08 10:56:09.773508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:49.273 [2024-10-08 10:56:09.773515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:49.273 [2024-10-08 10:56:09.773522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:49.273 [2024-10-08 10:56:09.773530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:49.273 [2024-10-08 10:56:09.773540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:49.273 [2024-10-08 10:56:09.773547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.273 [2024-10-08 10:56:09.773555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:49.273 [2024-10-08 10:56:09.773563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:24:49.273 [2024-10-08 10:56:09.773570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.273 [2024-10-08 10:56:09.792906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.273 [2024-10-08 10:56:09.792950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.273 [2024-10-08 10:56:09.792964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.289 ms 00:24:49.273 [2024-10-08 10:56:09.792973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.273 [2024-10-08 10:56:09.793070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.273 [2024-10-08 10:56:09.793080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:49.273 [2024-10-08 10:56:09.793090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:49.273 [2024-10-08 10:56:09.793105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.273 [2024-10-08 10:56:09.801439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.273 [2024-10-08 10:56:09.801474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.273 [2024-10-08 10:56:09.801485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.273 ms 00:24:49.273 [2024-10-08 10:56:09.801493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.273 [2024-10-08 10:56:09.801522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.273 [2024-10-08 10:56:09.801531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.273 [2024-10-08 10:56:09.801546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:49.273 [2024-10-08 10:56:09.801557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.273 [2024-10-08 10:56:09.801929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.273 [2024-10-08 10:56:09.801952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.273 [2024-10-08 10:56:09.801962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:24:49.273 [2024-10-08 10:56:09.801970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.273 [2024-10-08 10:56:09.802106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.273 [2024-10-08 10:56:09.802116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.273 [2024-10-08 10:56:09.802126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:24:49.273 [2024-10-08 10:56:09.802135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.273 [2024-10-08 10:56:09.806856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.273 [2024-10-08 
10:56:09.806891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.273 [2024-10-08 10:56:09.806901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.697 ms 00:24:49.273 [2024-10-08 10:56:09.806909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.273 [2024-10-08 10:56:09.809678] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:49.273 [2024-10-08 10:56:09.809714] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:49.273 [2024-10-08 10:56:09.809748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.273 [2024-10-08 10:56:09.809757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:49.273 [2024-10-08 10:56:09.809773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.747 ms 00:24:49.273 [2024-10-08 10:56:09.809780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.273 [2024-10-08 10:56:09.824298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.273 [2024-10-08 10:56:09.824343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:49.273 [2024-10-08 10:56:09.824354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.466 ms 00:24:49.273 [2024-10-08 10:56:09.824361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.273 [2024-10-08 10:56:09.826158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.273 [2024-10-08 10:56:09.826187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:49.273 [2024-10-08 10:56:09.826195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.761 ms 00:24:49.273 [2024-10-08 10:56:09.826203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.273 [2024-10-08 10:56:09.827813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.273 [2024-10-08 10:56:09.827840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:49.273 [2024-10-08 10:56:09.827848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:24:49.273 [2024-10-08 10:56:09.827854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.273 [2024-10-08 10:56:09.828170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.273 [2024-10-08 10:56:09.828185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:49.273 [2024-10-08 10:56:09.828193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:24:49.273 [2024-10-08 10:56:09.828201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.273 [2024-10-08 10:56:09.842903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.273 [2024-10-08 10:56:09.842963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:49.273 [2024-10-08 10:56:09.842974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.687 ms 00:24:49.273 [2024-10-08 10:56:09.842982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.534 [2024-10-08 10:56:09.850343] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:49.534 [2024-10-08 10:56:09.852619] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.534 [2024-10-08 10:56:09.852648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:49.534 [2024-10-08 10:56:09.852665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.596 ms 00:24:49.534 [2024-10-08 10:56:09.852673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.534 [2024-10-08 10:56:09.852725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.534 [2024-10-08 10:56:09.852739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:49.534 [2024-10-08 10:56:09.852748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:49.534 [2024-10-08 10:56:09.852756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.534 [2024-10-08 10:56:09.853324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.534 [2024-10-08 10:56:09.853354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:49.534 [2024-10-08 10:56:09.853364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:24:49.534 [2024-10-08 10:56:09.853374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.534 [2024-10-08 10:56:09.853397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.534 [2024-10-08 10:56:09.853409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:49.534 [2024-10-08 10:56:09.853420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:49.534 [2024-10-08 10:56:09.853427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.534 [2024-10-08 10:56:09.853486] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:49.534 [2024-10-08 10:56:09.853497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.534 [2024-10-08 10:56:09.853504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:49.534 [2024-10-08 10:56:09.853512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:49.534 [2024-10-08 10:56:09.853521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.534 [2024-10-08 10:56:09.857413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.534 [2024-10-08 10:56:09.857447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:49.534 [2024-10-08 10:56:09.857456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.876 ms 00:24:49.534 [2024-10-08 10:56:09.857469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.534 [2024-10-08 10:56:09.857535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.534 [2024-10-08 10:56:09.857544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:49.534 [2024-10-08 10:56:09.857552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:49.534 [2024-10-08 10:56:09.857560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.534 [2024-10-08 10:56:09.858632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 96.796 ms, result 0 00:24:50.477  [2024-10-08T10:56:12.450Z] Copying: 11/1024 [MB] (11 MBps) [2024-10-08T10:56:13.387Z] Copying: 23/1024 [MB] (11 MBps) 
[2024-10-08T10:56:14.322Z] Copying: 59/1024 [MB] (35 MBps) [2024-10-08T10:56:15.265Z] Copying: 107/1024 [MB] (48 MBps) [2024-10-08T10:56:16.210Z] Copying: 132/1024 [MB] (24 MBps) [2024-10-08T10:56:17.153Z] Copying: 147/1024 [MB] (14 MBps) [2024-10-08T10:56:18.097Z] Copying: 170/1024 [MB] (23 MBps) [2024-10-08T10:56:19.041Z] Copying: 190/1024 [MB] (19 MBps) [2024-10-08T10:56:20.412Z] Copying: 213/1024 [MB] (23 MBps) [2024-10-08T10:56:21.343Z] Copying: 256/1024 [MB] (43 MBps) [2024-10-08T10:56:22.275Z] Copying: 309/1024 [MB] (52 MBps) [2024-10-08T10:56:23.206Z] Copying: 357/1024 [MB] (48 MBps) [2024-10-08T10:56:24.161Z] Copying: 404/1024 [MB] (47 MBps) [2024-10-08T10:56:25.098Z] Copying: 454/1024 [MB] (49 MBps) [2024-10-08T10:56:26.032Z] Copying: 500/1024 [MB] (46 MBps) [2024-10-08T10:56:27.405Z] Copying: 550/1024 [MB] (50 MBps) [2024-10-08T10:56:28.338Z] Copying: 596/1024 [MB] (46 MBps) [2024-10-08T10:56:29.279Z] Copying: 644/1024 [MB] (47 MBps) [2024-10-08T10:56:30.212Z] Copying: 692/1024 [MB] (48 MBps) [2024-10-08T10:56:31.145Z] Copying: 741/1024 [MB] (48 MBps) [2024-10-08T10:56:32.078Z] Copying: 786/1024 [MB] (45 MBps) [2024-10-08T10:56:33.450Z] Copying: 837/1024 [MB] (50 MBps) [2024-10-08T10:56:34.383Z] Copying: 886/1024 [MB] (49 MBps) [2024-10-08T10:56:35.349Z] Copying: 928/1024 [MB] (42 MBps) [2024-10-08T10:56:36.282Z] Copying: 976/1024 [MB] (47 MBps) [2024-10-08T10:56:36.282Z] Copying: 1024/1024 [MB] (average 39 MBps)[2024-10-08 10:56:36.174207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.705 [2024-10-08 10:56:36.174272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:15.705 [2024-10-08 10:56:36.174287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:15.705 [2024-10-08 10:56:36.174295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.705 [2024-10-08 10:56:36.174321] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:15.705 [2024-10-08 10:56:36.174768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.705 [2024-10-08 10:56:36.174819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:15.705 [2024-10-08 10:56:36.174830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:25:15.705 [2024-10-08 10:56:36.174837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.705 [2024-10-08 10:56:36.175056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.705 [2024-10-08 10:56:36.175072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:15.705 [2024-10-08 10:56:36.175082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:25:15.705 [2024-10-08 10:56:36.175090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.705 [2024-10-08 10:56:36.178983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.705 [2024-10-08 10:56:36.179025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:15.705 [2024-10-08 10:56:36.179039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:25:15.705 [2024-10-08 10:56:36.179051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.705 [2024-10-08 10:56:36.189652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.705 [2024-10-08 10:56:36.189689] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:15.705 [2024-10-08 10:56:36.189704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.578 ms 00:25:15.705 [2024-10-08 10:56:36.189716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.705 [2024-10-08 10:56:36.191394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.705 [2024-10-08 10:56:36.191428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:15.705 [2024-10-08 10:56:36.191437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:25:15.705 [2024-10-08 10:56:36.191444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.705 [2024-10-08 10:56:36.194557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.705 [2024-10-08 10:56:36.194590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:15.705 [2024-10-08 10:56:36.194599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.094 ms 00:25:15.705 [2024-10-08 10:56:36.194607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.705 [2024-10-08 10:56:36.196249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.705 [2024-10-08 10:56:36.196293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:15.705 [2024-10-08 10:56:36.196305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.609 ms 00:25:15.705 [2024-10-08 10:56:36.196313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.705 [2024-10-08 10:56:36.197733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.705 [2024-10-08 10:56:36.197776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:15.705 [2024-10-08 10:56:36.197785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.404 ms 00:25:15.705 [2024-10-08 10:56:36.197792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.705 [2024-10-08 10:56:36.199059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.705 [2024-10-08 10:56:36.199089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:15.705 [2024-10-08 10:56:36.199097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:25:15.705 [2024-10-08 10:56:36.199103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.705 [2024-10-08 10:56:36.200215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.705 [2024-10-08 10:56:36.200256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:15.705 [2024-10-08 10:56:36.200265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:25:15.705 [2024-10-08 10:56:36.200272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.705 [2024-10-08 10:56:36.201022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.705 [2024-10-08 10:56:36.201052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:15.705 [2024-10-08 10:56:36.201061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:25:15.705 [2024-10-08 10:56:36.201068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.705 [2024-10-08 10:56:36.201084] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:25:15.705 [2024-10-08 10:56:36.201101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:15.705 [2024-10-08 10:56:36.201112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:15.705 [2024-10-08 10:56:36.201120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:15.705 [2024-10-08 10:56:36.201238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201465] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 
10:56:36.201651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 
00:25:15.706 [2024-10-08 10:56:36.201858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:15.706 [2024-10-08 10:56:36.201873] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:15.706 [2024-10-08 10:56:36.201881] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1ad8d3e5-9eb8-4391-a93c-2da83e17e49e 00:25:15.706 [2024-10-08 10:56:36.201888] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:15.706 [2024-10-08 10:56:36.201896] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:15.706 [2024-10-08 10:56:36.201903] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:15.706 [2024-10-08 10:56:36.201910] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:15.706 [2024-10-08 10:56:36.201918] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:15.706 [2024-10-08 10:56:36.201925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:15.706 [2024-10-08 10:56:36.201932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:15.706 [2024-10-08 10:56:36.201938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:15.706 [2024-10-08 10:56:36.201944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:15.706 [2024-10-08 10:56:36.201951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.706 [2024-10-08 10:56:36.201958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:15.706 [2024-10-08 10:56:36.201971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:25:15.706 [2024-10-08 10:56:36.201978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.706 [2024-10-08 10:56:36.203286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.706 [2024-10-08 10:56:36.203311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:15.707 [2024-10-08 10:56:36.203320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:25:15.707 [2024-10-08 10:56:36.203327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.707 [2024-10-08 10:56:36.203409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.707 [2024-10-08 10:56:36.203422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:15.707 [2024-10-08 10:56:36.203431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:15.707 [2024-10-08 10:56:36.203438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.707 [2024-10-08 10:56:36.209074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.707 [2024-10-08 10:56:36.209110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:15.707 [2024-10-08 10:56:36.209119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.707 [2024-10-08 10:56:36.209126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.707 [2024-10-08 10:56:36.209174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.707 [2024-10-08 10:56:36.209182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:15.707 [2024-10-08 10:56:36.209189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:25:15.707 [2024-10-08 10:56:36.209196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.707 [2024-10-08 10:56:36.209230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.707 [2024-10-08 10:56:36.209239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:15.707 [2024-10-08 10:56:36.209246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.707 [2024-10-08 10:56:36.209254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.707 [2024-10-08 10:56:36.209274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.707 [2024-10-08 10:56:36.209284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:15.707 [2024-10-08 10:56:36.209291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.707 [2024-10-08 10:56:36.209299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.707 [2024-10-08 10:56:36.218620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.707 [2024-10-08 10:56:36.218657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:15.707 [2024-10-08 10:56:36.218668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.707 [2024-10-08 10:56:36.218677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.707 [2024-10-08 10:56:36.225237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.707 [2024-10-08 10:56:36.225274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:15.707 [2024-10-08 10:56:36.225284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.707 [2024-10-08 10:56:36.225292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.707 [2024-10-08 10:56:36.225337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.707 [2024-10-08 10:56:36.225346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:15.707 [2024-10-08 10:56:36.225354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.707 [2024-10-08 10:56:36.225361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.707 [2024-10-08 10:56:36.225384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.707 [2024-10-08 10:56:36.225392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:15.707 [2024-10-08 10:56:36.225404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.707 [2024-10-08 10:56:36.225411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.707 [2024-10-08 10:56:36.225469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.707 [2024-10-08 10:56:36.225478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:15.707 [2024-10-08 10:56:36.225486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.707 [2024-10-08 10:56:36.225493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.707 [2024-10-08 10:56:36.225517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.707 [2024-10-08 10:56:36.225531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:15.707 [2024-10-08 
10:56:36.225538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.707 [2024-10-08 10:56:36.225630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.707 [2024-10-08 10:56:36.225668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.707 [2024-10-08 10:56:36.225677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:15.707 [2024-10-08 10:56:36.225685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.707 [2024-10-08 10:56:36.225692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.707 [2024-10-08 10:56:36.225733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:15.707 [2024-10-08 10:56:36.225743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:15.707 [2024-10-08 10:56:36.225774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:15.707 [2024-10-08 10:56:36.225782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.707 [2024-10-08 10:56:36.225903] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.674 ms, result 0 00:25:15.965 00:25:15.965 00:25:15.965 10:56:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:18.495 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:18.495 10:56:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:18.495 10:56:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:18.495 10:56:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:18.495 10:56:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:18.495 10:56:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:18.495 10:56:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:18.495 10:56:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:18.495 Process with pid 90638 is not found 00:25:18.495 10:56:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 90638 00:25:18.495 10:56:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 90638 ']' 00:25:18.495 10:56:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 90638 00:25:18.495 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90638) - No such process 00:25:18.495 10:56:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 90638 is not found' 00:25:18.495 10:56:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:18.495 Remove shared memory files 00:25:18.495 10:56:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:18.495 10:56:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:18.495 10:56:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:18.495 10:56:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:18.495 10:56:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:25:18.495 10:56:39 
ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:18.495 10:56:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:18.495 00:25:18.495 real 3m24.737s 00:25:18.495 user 3m39.164s 00:25:18.495 sys 0m22.516s 00:25:18.495 10:56:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:18.495 ************************************ 00:25:18.495 10:56:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:18.495 END TEST ftl_dirty_shutdown 00:25:18.495 ************************************ 00:25:18.495 10:56:39 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:18.495 10:56:39 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:18.495 10:56:39 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:18.495 10:56:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:18.754 ************************************ 00:25:18.754 START TEST ftl_upgrade_shutdown 00:25:18.754 ************************************ 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:18.754 * Looking for test storage... 00:25:18.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:18.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.754 --rc genhtml_branch_coverage=1 00:25:18.754 --rc genhtml_function_coverage=1 00:25:18.754 --rc genhtml_legend=1 00:25:18.754 --rc geninfo_all_blocks=1 00:25:18.754 --rc geninfo_unexecuted_blocks=1 00:25:18.754 00:25:18.754 ' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:18.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.754 --rc genhtml_branch_coverage=1 00:25:18.754 --rc genhtml_function_coverage=1 00:25:18.754 --rc genhtml_legend=1 00:25:18.754 --rc geninfo_all_blocks=1 00:25:18.754 --rc geninfo_unexecuted_blocks=1 00:25:18.754 00:25:18.754 ' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:18.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.754 --rc genhtml_branch_coverage=1 00:25:18.754 --rc genhtml_function_coverage=1 00:25:18.754 --rc genhtml_legend=1 00:25:18.754 --rc geninfo_all_blocks=1 00:25:18.754 --rc geninfo_unexecuted_blocks=1 00:25:18.754 00:25:18.754 ' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:18.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.754 --rc genhtml_branch_coverage=1 00:25:18.754 --rc genhtml_function_coverage=1 00:25:18.754 --rc genhtml_legend=1 00:25:18.754 --rc geninfo_all_blocks=1 00:25:18.754 --rc geninfo_unexecuted_blocks=1 00:25:18.754 00:25:18.754 ' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:18.754 10:56:39 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92888 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92888 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92888 ']' 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:18.754 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.755 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:18.755 10:56:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:18.755 [2024-10-08 10:56:39.313528] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:25:18.755 [2024-10-08 10:56:39.313647] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92888 ] 00:25:19.013 [2024-10-08 10:56:39.441053] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
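[annotation] At this point the harness has launched spdk_tgt pinned to core 0 and is blocking in waitforlisten (common.sh@91 above, pid 92888) until the target's RPC socket answers; only then does it proceed to attach the base bdev. A minimal sketch of that launch-and-poll pattern, assuming the default /var/tmp/spdk.sock socket and illustrative retry/timeout values — SPDK's real waitforlisten in autotest_common.sh adds liveness checks on the pid and cleanup on failure:

  #!/usr/bin/env bash
  # Illustrative only -- paths mirror the ones visible in this trace.
  spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Start the target pinned to core 0, as in the '--cpumask=[0]' invocation above.
  "$spdk_tgt_bin" --cpumask='[0]' &
  spdk_tgt_pid=$!

  # Poll the RPC socket until the target responds (~10 s budget); rpc_get_methods
  # is a cheap request that succeeds as soon as the app is listening.
  up=0
  for ((i = 0; i < 100; i++)); do
      if "$rpc_py" -t 1 rpc_get_methods &> /dev/null; then
          up=1
          break
      fi
      sleep 0.1
  done
  [[ $up -eq 1 ]] || { kill "$spdk_tgt_pid"; exit 1; }

  # Safe to configure now, e.g. the base-bdev attach traced a few lines below:
  "$rpc_py" bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0

Once the socket answers, every later rpc.py call in the trace (bdev_nvme_attach_controller, bdev_get_bdevs, bdev_lvol_get_lvstores) goes over that same UNIX socket. [end annotation]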
00:25:19.013 [2024-10-08 10:56:39.461996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.013 [2024-10-08 10:56:39.492731] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:19.608 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:19.866 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:19.866 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:19.866 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:19.866 10:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:25:19.866 10:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:19.866 10:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:19.866 10:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:19.866 10:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:20.124 10:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:20.124 { 00:25:20.124 "name": 
"basen1", 00:25:20.124 "aliases": [ 00:25:20.124 "96654ff4-dc99-48c8-8a87-02052422ea40" 00:25:20.124 ], 00:25:20.124 "product_name": "NVMe disk", 00:25:20.124 "block_size": 4096, 00:25:20.124 "num_blocks": 1310720, 00:25:20.124 "uuid": "96654ff4-dc99-48c8-8a87-02052422ea40", 00:25:20.124 "numa_id": -1, 00:25:20.124 "assigned_rate_limits": { 00:25:20.124 "rw_ios_per_sec": 0, 00:25:20.124 "rw_mbytes_per_sec": 0, 00:25:20.124 "r_mbytes_per_sec": 0, 00:25:20.124 "w_mbytes_per_sec": 0 00:25:20.124 }, 00:25:20.124 "claimed": true, 00:25:20.124 "claim_type": "read_many_write_one", 00:25:20.124 "zoned": false, 00:25:20.124 "supported_io_types": { 00:25:20.124 "read": true, 00:25:20.124 "write": true, 00:25:20.124 "unmap": true, 00:25:20.124 "flush": true, 00:25:20.124 "reset": true, 00:25:20.124 "nvme_admin": true, 00:25:20.124 "nvme_io": true, 00:25:20.124 "nvme_io_md": false, 00:25:20.124 "write_zeroes": true, 00:25:20.124 "zcopy": false, 00:25:20.124 "get_zone_info": false, 00:25:20.124 "zone_management": false, 00:25:20.124 "zone_append": false, 00:25:20.124 "compare": true, 00:25:20.124 "compare_and_write": false, 00:25:20.124 "abort": true, 00:25:20.124 "seek_hole": false, 00:25:20.124 "seek_data": false, 00:25:20.124 "copy": true, 00:25:20.124 "nvme_iov_md": false 00:25:20.124 }, 00:25:20.124 "driver_specific": { 00:25:20.124 "nvme": [ 00:25:20.124 { 00:25:20.124 "pci_address": "0000:00:11.0", 00:25:20.124 "trid": { 00:25:20.124 "trtype": "PCIe", 00:25:20.124 "traddr": "0000:00:11.0" 00:25:20.124 }, 00:25:20.124 "ctrlr_data": { 00:25:20.124 "cntlid": 0, 00:25:20.124 "vendor_id": "0x1b36", 00:25:20.124 "model_number": "QEMU NVMe Ctrl", 00:25:20.124 "serial_number": "12341", 00:25:20.124 "firmware_revision": "8.0.0", 00:25:20.124 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:20.124 "oacs": { 00:25:20.124 "security": 0, 00:25:20.124 "format": 1, 00:25:20.124 "firmware": 0, 00:25:20.124 "ns_manage": 1 00:25:20.124 }, 00:25:20.124 "multi_ctrlr": false, 00:25:20.124 "ana_reporting": false 00:25:20.124 }, 00:25:20.124 "vs": { 00:25:20.124 "nvme_version": "1.4" 00:25:20.124 }, 00:25:20.124 "ns_data": { 00:25:20.124 "id": 1, 00:25:20.124 "can_share": false 00:25:20.124 } 00:25:20.124 } 00:25:20.124 ], 00:25:20.124 "mp_policy": "active_passive" 00:25:20.124 } 00:25:20.124 } 00:25:20.124 ]' 00:25:20.124 10:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:20.124 10:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:20.124 10:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:20.124 10:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:20.124 10:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:20.124 10:56:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:25:20.124 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:20.124 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:20.124 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:20.124 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:20.124 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:20.382 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=51f41746-3a64-4a6b-9203-14da0e553293 00:25:20.382 10:56:40 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:20.382 10:56:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 51f41746-3a64-4a6b-9203-14da0e553293 00:25:20.640 10:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:20.898 10:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=19d0fe98-dc8a-4757-ae67-8d3638dd09fe 00:25:20.898 10:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 19d0fe98-dc8a-4757-ae67-8d3638dd09fe 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=42bf0898-8937-443d-8d17-06ef63539141 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 42bf0898-8937-443d-8d17-06ef63539141 ]] 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 42bf0898-8937-443d-8d17-06ef63539141 5120 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=42bf0898-8937-443d-8d17-06ef63539141 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 42bf0898-8937-443d-8d17-06ef63539141 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=42bf0898-8937-443d-8d17-06ef63539141 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 42bf0898-8937-443d-8d17-06ef63539141 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:21.157 { 00:25:21.157 "name": "42bf0898-8937-443d-8d17-06ef63539141", 00:25:21.157 "aliases": [ 00:25:21.157 "lvs/basen1p0" 00:25:21.157 ], 00:25:21.157 "product_name": "Logical Volume", 00:25:21.157 "block_size": 4096, 00:25:21.157 "num_blocks": 5242880, 00:25:21.157 "uuid": "42bf0898-8937-443d-8d17-06ef63539141", 00:25:21.157 "assigned_rate_limits": { 00:25:21.157 "rw_ios_per_sec": 0, 00:25:21.157 "rw_mbytes_per_sec": 0, 00:25:21.157 "r_mbytes_per_sec": 0, 00:25:21.157 "w_mbytes_per_sec": 0 00:25:21.157 }, 00:25:21.157 "claimed": false, 00:25:21.157 "zoned": false, 00:25:21.157 "supported_io_types": { 00:25:21.157 "read": true, 00:25:21.157 "write": true, 00:25:21.157 "unmap": true, 00:25:21.157 "flush": false, 00:25:21.157 "reset": true, 00:25:21.157 "nvme_admin": false, 00:25:21.157 "nvme_io": false, 00:25:21.157 "nvme_io_md": false, 00:25:21.157 "write_zeroes": true, 00:25:21.157 "zcopy": false, 00:25:21.157 "get_zone_info": false, 00:25:21.157 "zone_management": false, 00:25:21.157 "zone_append": false, 00:25:21.157 "compare": false, 00:25:21.157 "compare_and_write": false, 00:25:21.157 "abort": false, 00:25:21.157 "seek_hole": true, 00:25:21.157 "seek_data": true, 
00:25:21.157 "copy": false, 00:25:21.157 "nvme_iov_md": false 00:25:21.157 }, 00:25:21.157 "driver_specific": { 00:25:21.157 "lvol": { 00:25:21.157 "lvol_store_uuid": "19d0fe98-dc8a-4757-ae67-8d3638dd09fe", 00:25:21.157 "base_bdev": "basen1", 00:25:21.157 "thin_provision": true, 00:25:21.157 "num_allocated_clusters": 0, 00:25:21.157 "snapshot": false, 00:25:21.157 "clone": false, 00:25:21.157 "esnap_clone": false 00:25:21.157 } 00:25:21.157 } 00:25:21.157 } 00:25:21.157 ]' 00:25:21.157 10:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:21.415 10:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:21.415 10:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:21.415 10:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:25:21.415 10:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:25:21.415 10:56:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:25:21.415 10:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:25:21.415 10:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:21.415 10:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:21.673 10:56:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:21.673 10:56:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:21.673 10:56:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:21.673 10:56:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:21.673 10:56:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:21.673 10:56:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 42bf0898-8937-443d-8d17-06ef63539141 -c cachen1p0 --l2p_dram_limit 2 00:25:21.931 [2024-10-08 10:56:42.400926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.931 [2024-10-08 10:56:42.400976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:21.931 [2024-10-08 10:56:42.400992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:21.931 [2024-10-08 10:56:42.401000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.931 [2024-10-08 10:56:42.401052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.931 [2024-10-08 10:56:42.401066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:21.931 [2024-10-08 10:56:42.401078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:25:21.931 [2024-10-08 10:56:42.401088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.931 [2024-10-08 10:56:42.401108] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:21.931 [2024-10-08 10:56:42.401390] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:21.931 [2024-10-08 10:56:42.401408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.931 [2024-10-08 10:56:42.401419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 
name: Open cache bdev 00:25:21.931 [2024-10-08 10:56:42.401429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.305 ms 00:25:21.931 [2024-10-08 10:56:42.401437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.931 [2024-10-08 10:56:42.401469] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID f77c43c4-6b24-4e97-a2fc-6dd276304a9e 00:25:21.931 [2024-10-08 10:56:42.402568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.931 [2024-10-08 10:56:42.402599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:21.931 [2024-10-08 10:56:42.402612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:25:21.931 [2024-10-08 10:56:42.402623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.931 [2024-10-08 10:56:42.407591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.931 [2024-10-08 10:56:42.407625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:21.931 [2024-10-08 10:56:42.407635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.899 ms 00:25:21.931 [2024-10-08 10:56:42.407646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.931 [2024-10-08 10:56:42.407684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.931 [2024-10-08 10:56:42.407694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:21.931 [2024-10-08 10:56:42.407703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:21.931 [2024-10-08 10:56:42.407712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.931 [2024-10-08 10:56:42.407755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.931 [2024-10-08 10:56:42.407767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:21.931 [2024-10-08 10:56:42.407778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:21.931 [2024-10-08 10:56:42.407787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.931 [2024-10-08 10:56:42.407822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:21.931 [2024-10-08 10:56:42.409227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.931 [2024-10-08 10:56:42.409253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:21.931 [2024-10-08 10:56:42.409266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.408 ms 00:25:21.931 [2024-10-08 10:56:42.409274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.931 [2024-10-08 10:56:42.409299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.931 [2024-10-08 10:56:42.409307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:21.931 [2024-10-08 10:56:42.409318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:21.931 [2024-10-08 10:56:42.409325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.931 [2024-10-08 10:56:42.409343] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:21.931 [2024-10-08 10:56:42.409478] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:21.931 [2024-10-08 
10:56:42.409500] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:21.931 [2024-10-08 10:56:42.409511] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:21.931 [2024-10-08 10:56:42.409523] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:21.931 [2024-10-08 10:56:42.409531] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:21.931 [2024-10-08 10:56:42.409544] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:21.931 [2024-10-08 10:56:42.409555] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:21.931 [2024-10-08 10:56:42.409564] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:21.931 [2024-10-08 10:56:42.409573] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:21.931 [2024-10-08 10:56:42.409588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.931 [2024-10-08 10:56:42.409598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:21.931 [2024-10-08 10:56:42.409607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.247 ms 00:25:21.931 [2024-10-08 10:56:42.409614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.931 [2024-10-08 10:56:42.409702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.931 [2024-10-08 10:56:42.409710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:21.931 [2024-10-08 10:56:42.409719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:25:21.931 [2024-10-08 10:56:42.409726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.931 [2024-10-08 10:56:42.409848] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:21.931 [2024-10-08 10:56:42.409865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:21.931 [2024-10-08 10:56:42.409879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:21.931 [2024-10-08 10:56:42.409887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.931 [2024-10-08 10:56:42.409899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:21.931 [2024-10-08 10:56:42.409908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:21.931 [2024-10-08 10:56:42.409917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:21.931 [2024-10-08 10:56:42.409925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:21.931 [2024-10-08 10:56:42.409934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:21.931 [2024-10-08 10:56:42.409941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.931 [2024-10-08 10:56:42.409949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:21.931 [2024-10-08 10:56:42.409957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:21.931 [2024-10-08 10:56:42.409967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.931 [2024-10-08 10:56:42.409975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:21.931 [2024-10-08 10:56:42.409984] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:21.931 [2024-10-08 10:56:42.409991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.931 [2024-10-08 10:56:42.410000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:21.931 [2024-10-08 10:56:42.410007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:21.931 [2024-10-08 10:56:42.410017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.931 [2024-10-08 10:56:42.410025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:21.931 [2024-10-08 10:56:42.410034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:21.931 [2024-10-08 10:56:42.410042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:21.931 [2024-10-08 10:56:42.410051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:21.931 [2024-10-08 10:56:42.410059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:21.931 [2024-10-08 10:56:42.410068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:21.931 [2024-10-08 10:56:42.410075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:21.931 [2024-10-08 10:56:42.410084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:21.931 [2024-10-08 10:56:42.410092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:21.931 [2024-10-08 10:56:42.410103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:21.931 [2024-10-08 10:56:42.410110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:21.931 [2024-10-08 10:56:42.410120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:21.931 [2024-10-08 10:56:42.410128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:21.931 [2024-10-08 10:56:42.410138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:21.931 [2024-10-08 10:56:42.410145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.931 [2024-10-08 10:56:42.410154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:21.931 [2024-10-08 10:56:42.410161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:21.931 [2024-10-08 10:56:42.410170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.931 [2024-10-08 10:56:42.410177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:21.931 [2024-10-08 10:56:42.410186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:21.931 [2024-10-08 10:56:42.410194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.931 [2024-10-08 10:56:42.410202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:21.932 [2024-10-08 10:56:42.410210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:21.932 [2024-10-08 10:56:42.410219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.932 [2024-10-08 10:56:42.410226] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:21.932 [2024-10-08 10:56:42.410237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:21.932 [2024-10-08 10:56:42.410245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:21.932 [2024-10-08 10:56:42.410254] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:21.932 [2024-10-08 10:56:42.410262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:21.932 [2024-10-08 10:56:42.410271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:21.932 [2024-10-08 10:56:42.410279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:21.932 [2024-10-08 10:56:42.410287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:21.932 [2024-10-08 10:56:42.410293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:21.932 [2024-10-08 10:56:42.410302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:21.932 [2024-10-08 10:56:42.410312] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:21.932 [2024-10-08 10:56:42.410322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:21.932 [2024-10-08 10:56:42.410330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:21.932 [2024-10-08 10:56:42.410340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:21.932 [2024-10-08 10:56:42.410348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:21.932 [2024-10-08 10:56:42.410356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:21.932 [2024-10-08 10:56:42.410363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:21.932 [2024-10-08 10:56:42.410373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:21.932 [2024-10-08 10:56:42.410380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:21.932 [2024-10-08 10:56:42.410388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:21.932 [2024-10-08 10:56:42.410395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:21.932 [2024-10-08 10:56:42.410404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:21.932 [2024-10-08 10:56:42.410411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:21.932 [2024-10-08 10:56:42.410419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:21.932 [2024-10-08 10:56:42.410426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:21.932 [2024-10-08 10:56:42.410435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:21.932 [2024-10-08 10:56:42.410442] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:21.932 [2024-10-08 10:56:42.410454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:21.932 [2024-10-08 10:56:42.410462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:21.932 [2024-10-08 10:56:42.410471] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:21.932 [2024-10-08 10:56:42.410478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:21.932 [2024-10-08 10:56:42.410486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:21.932 [2024-10-08 10:56:42.410493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:21.932 [2024-10-08 10:56:42.410505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:21.932 [2024-10-08 10:56:42.410512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.740 ms 00:25:21.932 [2024-10-08 10:56:42.410520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:21.932 [2024-10-08 10:56:42.410566] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:25:21.932 [2024-10-08 10:56:42.410578] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:24.498 [2024-10-08 10:56:44.612526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.498 [2024-10-08 10:56:44.612590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:24.498 [2024-10-08 10:56:44.612606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2201.950 ms 00:25:24.498 [2024-10-08 10:56:44.612630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.498 [2024-10-08 10:56:44.620316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.498 [2024-10-08 10:56:44.620360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:24.498 [2024-10-08 10:56:44.620376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.610 ms 00:25:24.498 [2024-10-08 10:56:44.620387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.498 [2024-10-08 10:56:44.620434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.498 [2024-10-08 10:56:44.620446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:24.498 [2024-10-08 10:56:44.620457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:25:24.498 [2024-10-08 10:56:44.620465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.498 [2024-10-08 10:56:44.628093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.498 [2024-10-08 10:56:44.628132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:24.498 [2024-10-08 10:56:44.628142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.598 ms 00:25:24.498 [2024-10-08 10:56:44.628152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.498 [2024-10-08 10:56:44.628179] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.498 [2024-10-08 10:56:44.628191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:24.498 [2024-10-08 10:56:44.628204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:24.498 [2024-10-08 10:56:44.628216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.498 [2024-10-08 10:56:44.628539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.498 [2024-10-08 10:56:44.628567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:24.498 [2024-10-08 10:56:44.628576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.278 ms 00:25:24.498 [2024-10-08 10:56:44.628586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.498 [2024-10-08 10:56:44.628625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.498 [2024-10-08 10:56:44.628635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:24.498 [2024-10-08 10:56:44.628645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:25:24.498 [2024-10-08 10:56:44.628656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.498 [2024-10-08 10:56:44.644544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.498 [2024-10-08 10:56:44.644602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:24.498 [2024-10-08 10:56:44.644619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.869 ms 00:25:24.498 [2024-10-08 10:56:44.644634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.498 [2024-10-08 10:56:44.654365] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:24.498 [2024-10-08 10:56:44.655163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.498 [2024-10-08 10:56:44.655190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:24.498 [2024-10-08 10:56:44.655202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.414 ms 00:25:24.498 [2024-10-08 10:56:44.655209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.498 [2024-10-08 10:56:44.667636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.498 [2024-10-08 10:56:44.667738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:24.498 [2024-10-08 10:56:44.667784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.388 ms 00:25:24.498 [2024-10-08 10:56:44.667844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.498 [2024-10-08 10:56:44.668064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.498 [2024-10-08 10:56:44.668122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:24.498 [2024-10-08 10:56:44.668154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.118 ms 00:25:24.498 [2024-10-08 10:56:44.668188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.498 [2024-10-08 10:56:44.674207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.498 [2024-10-08 10:56:44.674287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:24.498 [2024-10-08 10:56:44.674321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 5.923 ms 00:25:24.498 [2024-10-08 10:56:44.674331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.498 [2024-10-08 10:56:44.676650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.499 [2024-10-08 10:56:44.676682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:24.499 [2024-10-08 10:56:44.676693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.236 ms 00:25:24.499 [2024-10-08 10:56:44.676700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.499 [2024-10-08 10:56:44.676998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.499 [2024-10-08 10:56:44.677018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:24.499 [2024-10-08 10:56:44.677030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:25:24.499 [2024-10-08 10:56:44.677037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.499 [2024-10-08 10:56:44.701125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.499 [2024-10-08 10:56:44.701158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:24.499 [2024-10-08 10:56:44.701169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.053 ms 00:25:24.499 [2024-10-08 10:56:44.701177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.499 [2024-10-08 10:56:44.704569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.499 [2024-10-08 10:56:44.704603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:24.499 [2024-10-08 10:56:44.704615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.338 ms 00:25:24.499 [2024-10-08 10:56:44.704624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.499 [2024-10-08 10:56:44.707224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.499 [2024-10-08 10:56:44.707254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:25:24.499 [2024-10-08 10:56:44.707264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.565 ms 00:25:24.499 [2024-10-08 10:56:44.707271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.499 [2024-10-08 10:56:44.710131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.499 [2024-10-08 10:56:44.710163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:24.499 [2024-10-08 10:56:44.710176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.825 ms 00:25:24.499 [2024-10-08 10:56:44.710185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.499 [2024-10-08 10:56:44.710225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.499 [2024-10-08 10:56:44.710233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:24.499 [2024-10-08 10:56:44.710243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:24.499 [2024-10-08 10:56:44.710250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.499 [2024-10-08 10:56:44.710312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.499 [2024-10-08 10:56:44.710321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:24.499 [2024-10-08 10:56:44.710330] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:25:24.499 [2024-10-08 10:56:44.710337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.499 [2024-10-08 10:56:44.711184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2309.860 ms, result 0 00:25:24.499 { 00:25:24.499 "name": "ftl", 00:25:24.499 "uuid": "f77c43c4-6b24-4e97-a2fc-6dd276304a9e" 00:25:24.499 } 00:25:24.499 10:56:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:24.499 [2024-10-08 10:56:44.915724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.499 10:56:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:24.756 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:24.757 [2024-10-08 10:56:45.312140] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:24.757 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:25.014 [2024-10-08 10:56:45.516511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:25.014 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:25.579 Fill FTL, iteration 1 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=92993 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@164 -- # export spdk_ini_pid 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 92993 /var/tmp/spdk.tgt.sock 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92993 ']' 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:25.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:25.579 10:56:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:25.579 [2024-10-08 10:56:45.940926] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:25:25.579 [2024-10-08 10:56:45.941062] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92993 ] 00:25:25.579 [2024-10-08 10:56:46.072171] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:25.579 [2024-10-08 10:56:46.091182] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.579 [2024-10-08 10:56:46.124165] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.509 10:56:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.509 10:56:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:26.509 10:56:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:26.509 ftln1 00:25:26.509 10:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:26.509 10:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:26.767 10:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:25:26.767 10:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 92993 00:25:26.767 10:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92993 ']' 00:25:26.767 10:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92993 00:25:26.767 10:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:25:26.767 10:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:26.767 10:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92993 00:25:26.767 10:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:26.767 10:56:47 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:26.767 killing process with pid 92993 00:25:26.767 10:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92993' 00:25:26.767 10:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92993 00:25:26.767 10:56:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92993 00:25:27.025 10:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:27.025 10:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:27.282 [2024-10-08 10:56:47.611851] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:25:27.282 [2024-10-08 10:56:47.611963] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93024 ] 00:25:27.282 [2024-10-08 10:56:47.739724] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:27.282 [2024-10-08 10:56:47.761670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.282 [2024-10-08 10:56:47.792361] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.666  [2024-10-08T10:56:50.174Z] Copying: 216/1024 [MB] (216 MBps) [2024-10-08T10:56:51.105Z] Copying: 443/1024 [MB] (227 MBps) [2024-10-08T10:56:52.036Z] Copying: 690/1024 [MB] (247 MBps) [2024-10-08T10:56:52.293Z] Copying: 957/1024 [MB] (267 MBps) [2024-10-08T10:56:52.550Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:25:31.973 00:25:31.973 Calculate MD5 checksum, iteration 1 00:25:31.973 10:56:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:31.973 10:56:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:31.974 10:56:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:31.974 10:56:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:31.974 10:56:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:31.974 10:56:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:31.974 10:56:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:31.974 10:56:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:31.974 [2024-10-08 10:56:52.436063] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:25:31.974 [2024-10-08 10:56:52.436152] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93077 ] 00:25:32.231 [2024-10-08 10:56:52.558363] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:32.231 [2024-10-08 10:56:52.576825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.231 [2024-10-08 10:56:52.605653] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.602  [2024-10-08T10:56:54.438Z] Copying: 703/1024 [MB] (703 MBps) [2024-10-08T10:56:54.438Z] Copying: 1024/1024 [MB] (average 691 MBps) 00:25:33.861 00:25:33.861 10:56:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:33.861 10:56:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:35.761 10:56:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:35.761 10:56:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=0ccd91c55da9d03013419f515f44f802 00:25:35.761 10:56:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:35.761 10:56:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:35.761 10:56:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:35.761 Fill FTL, iteration 2 00:25:35.761 10:56:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:35.761 10:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:35.761 10:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:35.761 10:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:35.761 10:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:35.761 10:56:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:35.761 [2024-10-08 10:56:56.098843] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:25:35.761 [2024-10-08 10:56:56.099046] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93123 ] 00:25:35.761 [2024-10-08 10:56:56.221321] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:35.761 [2024-10-08 10:56:56.242175] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.761 [2024-10-08 10:56:56.274204] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.133  [2024-10-08T10:56:58.642Z] Copying: 204/1024 [MB] (204 MBps) [2024-10-08T10:56:59.574Z] Copying: 429/1024 [MB] (225 MBps) [2024-10-08T10:57:00.505Z] Copying: 689/1024 [MB] (260 MBps) [2024-10-08T10:57:00.762Z] Copying: 951/1024 [MB] (262 MBps) [2024-10-08T10:57:01.019Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:25:40.442 00:25:40.442 Calculate MD5 checksum, iteration 2 00:25:40.442 10:57:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:40.442 10:57:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:40.442 10:57:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:40.442 10:57:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:40.442 10:57:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:40.442 10:57:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:40.443 10:57:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:40.443 10:57:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:40.443 [2024-10-08 10:57:00.947158] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:25:40.443 [2024-10-08 10:57:00.947277] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93177 ] 00:25:40.700 [2024-10-08 10:57:01.074714] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:25:40.700 [2024-10-08 10:57:01.092747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.700 [2024-10-08 10:57:01.121318] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.069  [2024-10-08T10:57:02.902Z] Copying: 682/1024 [MB] (682 MBps) [2024-10-08T10:57:07.099Z] Copying: 1024/1024 [MB] (average 686 MBps) 00:25:46.522 00:25:46.522 10:57:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:46.522 10:57:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:49.048 10:57:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:49.048 10:57:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=1902c472766ccd4efcca25fb294cd65c 00:25:49.048 10:57:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:49.048 10:57:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:49.048 10:57:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:49.048 [2024-10-08 10:57:09.252759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.048 [2024-10-08 10:57:09.252813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:49.048 [2024-10-08 10:57:09.252825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:49.048 [2024-10-08 10:57:09.252832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.048 [2024-10-08 10:57:09.252852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.048 [2024-10-08 10:57:09.252859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:49.048 [2024-10-08 10:57:09.252868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:49.048 [2024-10-08 10:57:09.252874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.048 [2024-10-08 10:57:09.252890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.048 [2024-10-08 10:57:09.252896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:49.048 [2024-10-08 10:57:09.252903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:49.048 [2024-10-08 10:57:09.252908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.048 [2024-10-08 10:57:09.252960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.188 ms, result 0 00:25:49.048 true 00:25:49.048 10:57:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:49.048 { 00:25:49.048 "name": "ftl", 00:25:49.048 "properties": [ 00:25:49.048 { 00:25:49.048 "name": "superblock_version", 00:25:49.048 "value": 5, 00:25:49.048 "read-only": true 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "name": "base_device", 00:25:49.048 "bands": [ 00:25:49.048 { 00:25:49.048 "id": 0, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 1, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 2, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 3, 00:25:49.048 "state": "FREE", 
00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 4, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 5, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 6, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 7, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 8, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 9, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 10, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 11, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 12, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 13, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 14, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 15, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 16, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "id": 17, 00:25:49.048 "state": "FREE", 00:25:49.048 "validity": 0.0 00:25:49.048 } 00:25:49.048 ], 00:25:49.048 "read-only": true 00:25:49.048 }, 00:25:49.048 { 00:25:49.048 "name": "cache_device", 00:25:49.048 "type": "bdev", 00:25:49.048 "chunks": [ 00:25:49.048 { 00:25:49.048 "id": 0, 00:25:49.048 "state": "INACTIVE", 00:25:49.048 "utilization": 0.0 00:25:49.049 }, 00:25:49.049 { 00:25:49.049 "id": 1, 00:25:49.049 "state": "CLOSED", 00:25:49.049 "utilization": 1.0 00:25:49.049 }, 00:25:49.049 { 00:25:49.049 "id": 2, 00:25:49.049 "state": "CLOSED", 00:25:49.049 "utilization": 1.0 00:25:49.049 }, 00:25:49.049 { 00:25:49.049 "id": 3, 00:25:49.049 "state": "OPEN", 00:25:49.049 "utilization": 0.001953125 00:25:49.049 }, 00:25:49.049 { 00:25:49.049 "id": 4, 00:25:49.049 "state": "OPEN", 00:25:49.049 "utilization": 0.0 00:25:49.049 } 00:25:49.049 ], 00:25:49.049 "read-only": true 00:25:49.049 }, 00:25:49.049 { 00:25:49.049 "name": "verbose_mode", 00:25:49.049 "value": true, 00:25:49.049 "unit": "", 00:25:49.049 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:49.049 }, 00:25:49.049 { 00:25:49.049 "name": "prep_upgrade_on_shutdown", 00:25:49.049 "value": false, 00:25:49.049 "unit": "", 00:25:49.049 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:49.049 } 00:25:49.049 ] 00:25:49.049 } 00:25:49.049 10:57:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:49.307 [2024-10-08 10:57:09.653073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.307 [2024-10-08 10:57:09.653116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:49.307 [2024-10-08 10:57:09.653127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:49.307 [2024-10-08 10:57:09.653134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:49.307 [2024-10-08 10:57:09.653152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.307 [2024-10-08 10:57:09.653159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:49.307 [2024-10-08 10:57:09.653165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:49.307 [2024-10-08 10:57:09.653171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.307 [2024-10-08 10:57:09.653186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.307 [2024-10-08 10:57:09.653192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:49.307 [2024-10-08 10:57:09.653199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:49.307 [2024-10-08 10:57:09.653204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.307 [2024-10-08 10:57:09.653249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.168 ms, result 0 00:25:49.307 true 00:25:49.307 10:57:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:49.307 10:57:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:49.307 10:57:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:49.564 10:57:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:49.564 10:57:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:49.564 10:57:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:49.564 [2024-10-08 10:57:10.037425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.564 [2024-10-08 10:57:10.037461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:49.564 [2024-10-08 10:57:10.037471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:49.564 [2024-10-08 10:57:10.037478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.564 [2024-10-08 10:57:10.037495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.564 [2024-10-08 10:57:10.037502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:49.564 [2024-10-08 10:57:10.037508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:49.564 [2024-10-08 10:57:10.037514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.564 [2024-10-08 10:57:10.037529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.564 [2024-10-08 10:57:10.037535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:49.564 [2024-10-08 10:57:10.037542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:49.564 [2024-10-08 10:57:10.037547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.564 [2024-10-08 10:57:10.037593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.160 ms, result 0 00:25:49.564 true 00:25:49.564 10:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:49.821 { 00:25:49.821 "name": "ftl", 00:25:49.822 "properties": [ 00:25:49.822 { 00:25:49.822 "name": "superblock_version", 00:25:49.822 "value": 5, 00:25:49.822 "read-only": true 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "name": "base_device", 00:25:49.822 "bands": [ 00:25:49.822 { 00:25:49.822 "id": 0, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 1, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 2, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 3, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 4, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 5, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 6, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 7, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 8, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 9, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 10, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 11, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 12, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 13, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 14, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 15, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 16, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 17, 00:25:49.822 "state": "FREE", 00:25:49.822 "validity": 0.0 00:25:49.822 } 00:25:49.822 ], 00:25:49.822 "read-only": true 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "name": "cache_device", 00:25:49.822 "type": "bdev", 00:25:49.822 "chunks": [ 00:25:49.822 { 00:25:49.822 "id": 0, 00:25:49.822 "state": "INACTIVE", 00:25:49.822 "utilization": 0.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 1, 00:25:49.822 "state": "CLOSED", 00:25:49.822 "utilization": 1.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 2, 00:25:49.822 "state": "CLOSED", 00:25:49.822 "utilization": 1.0 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 3, 00:25:49.822 "state": "OPEN", 00:25:49.822 "utilization": 0.001953125 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "id": 4, 00:25:49.822 "state": "OPEN", 00:25:49.822 "utilization": 0.0 00:25:49.822 } 00:25:49.822 ], 00:25:49.822 "read-only": true 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "name": "verbose_mode", 00:25:49.822 "value": true, 00:25:49.822 "unit": "", 00:25:49.822 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:49.822 }, 00:25:49.822 { 00:25:49.822 "name": "prep_upgrade_on_shutdown", 00:25:49.822 "value": true, 00:25:49.822 "unit": "", 00:25:49.822 
"desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:49.822 } 00:25:49.822 ] 00:25:49.822 } 00:25:49.822 10:57:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:49.822 10:57:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92888 ]] 00:25:49.822 10:57:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92888 00:25:49.822 10:57:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92888 ']' 00:25:49.822 10:57:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92888 00:25:49.822 10:57:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:25:49.822 10:57:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:49.822 10:57:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92888 00:25:49.822 killing process with pid 92888 00:25:49.822 10:57:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:49.822 10:57:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:49.822 10:57:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92888' 00:25:49.822 10:57:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92888 00:25:49.822 10:57:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92888 00:25:49.822 [2024-10-08 10:57:10.358908] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:49.822 [2024-10-08 10:57:10.362130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.822 [2024-10-08 10:57:10.362162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:49.822 [2024-10-08 10:57:10.362172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:49.822 [2024-10-08 10:57:10.362182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:49.822 [2024-10-08 10:57:10.362199] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:49.822 [2024-10-08 10:57:10.362589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:49.822 [2024-10-08 10:57:10.362612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:49.822 [2024-10-08 10:57:10.362619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.379 ms 00:25:49.822 [2024-10-08 10:57:10.362629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.838257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.930 [2024-10-08 10:57:17.838300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:57.930 [2024-10-08 10:57:17.838316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7475.577 ms 00:25:57.930 [2024-10-08 10:57:17.838324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.839533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.930 [2024-10-08 10:57:17.839555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:57.930 [2024-10-08 10:57:17.839562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.196 ms 00:25:57.930 [2024-10-08 10:57:17.839568] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.840459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.930 [2024-10-08 10:57:17.840476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:25:57.930 [2024-10-08 10:57:17.840484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.871 ms 00:25:57.930 [2024-10-08 10:57:17.840499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.841764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.930 [2024-10-08 10:57:17.841792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:57.930 [2024-10-08 10:57:17.841820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.238 ms 00:25:57.930 [2024-10-08 10:57:17.841826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.843851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.930 [2024-10-08 10:57:17.843879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:57.930 [2024-10-08 10:57:17.843886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.000 ms 00:25:57.930 [2024-10-08 10:57:17.843893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.843949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.930 [2024-10-08 10:57:17.843962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:57.930 [2024-10-08 10:57:17.843969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:25:57.930 [2024-10-08 10:57:17.843975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.845052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.930 [2024-10-08 10:57:17.845078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:25:57.930 [2024-10-08 10:57:17.845085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.065 ms 00:25:57.930 [2024-10-08 10:57:17.845091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.845872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.930 [2024-10-08 10:57:17.845894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:25:57.930 [2024-10-08 10:57:17.845901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.757 ms 00:25:57.930 [2024-10-08 10:57:17.845906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.846826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.930 [2024-10-08 10:57:17.846857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:57.930 [2024-10-08 10:57:17.846863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.896 ms 00:25:57.930 [2024-10-08 10:57:17.846869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.847608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.930 [2024-10-08 10:57:17.847713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:57.930 [2024-10-08 10:57:17.847724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.695 ms 
00:25:57.930 [2024-10-08 10:57:17.847730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.847752] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:57.930 [2024-10-08 10:57:17.847762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:57.930 [2024-10-08 10:57:17.847771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:57.930 [2024-10-08 10:57:17.847778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:57.930 [2024-10-08 10:57:17.847784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:57.930 [2024-10-08 10:57:17.847890] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:57.930 [2024-10-08 10:57:17.847896] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f77c43c4-6b24-4e97-a2fc-6dd276304a9e 00:25:57.930 [2024-10-08 10:57:17.847902] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:57.930 [2024-10-08 10:57:17.847908] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:25:57.930 [2024-10-08 10:57:17.847913] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:25:57.930 [2024-10-08 10:57:17.847919] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:25:57.930 [2024-10-08 
10:57:17.847929] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:57.930 [2024-10-08 10:57:17.847939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:57.930 [2024-10-08 10:57:17.847945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:57.930 [2024-10-08 10:57:17.847950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:57.930 [2024-10-08 10:57:17.847955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:57.930 [2024-10-08 10:57:17.847961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.930 [2024-10-08 10:57:17.847969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:57.930 [2024-10-08 10:57:17.847980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:25:57.930 [2024-10-08 10:57:17.847986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.849193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.930 [2024-10-08 10:57:17.849210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:57.930 [2024-10-08 10:57:17.849222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.194 ms 00:25:57.930 [2024-10-08 10:57:17.849228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.849294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.930 [2024-10-08 10:57:17.849300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:57.930 [2024-10-08 10:57:17.849306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:25:57.930 [2024-10-08 10:57:17.849312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.853847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:57.930 [2024-10-08 10:57:17.854034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:57.930 [2024-10-08 10:57:17.854047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:57.930 [2024-10-08 10:57:17.854053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.854077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:57.930 [2024-10-08 10:57:17.854084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:57.930 [2024-10-08 10:57:17.854090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:57.930 [2024-10-08 10:57:17.854096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.854157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:57.930 [2024-10-08 10:57:17.854166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:57.930 [2024-10-08 10:57:17.854172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:57.930 [2024-10-08 10:57:17.854180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.854193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:57.930 [2024-10-08 10:57:17.854202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:57.930 [2024-10-08 10:57:17.854209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
00:25:57.930 [2024-10-08 10:57:17.854214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.861919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:57.930 [2024-10-08 10:57:17.861950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:57.930 [2024-10-08 10:57:17.861964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:57.930 [2024-10-08 10:57:17.861970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.868040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:57.930 [2024-10-08 10:57:17.868074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:57.930 [2024-10-08 10:57:17.868084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:57.930 [2024-10-08 10:57:17.868091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.930 [2024-10-08 10:57:17.868126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:57.931 [2024-10-08 10:57:17.868134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:57.931 [2024-10-08 10:57:17.868140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:57.931 [2024-10-08 10:57:17.868147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.931 [2024-10-08 10:57:17.868190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:57.931 [2024-10-08 10:57:17.868198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:57.931 [2024-10-08 10:57:17.868204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:57.931 [2024-10-08 10:57:17.868209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.931 [2024-10-08 10:57:17.868262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:57.931 [2024-10-08 10:57:17.868270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:57.931 [2024-10-08 10:57:17.868277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:57.931 [2024-10-08 10:57:17.868283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.931 [2024-10-08 10:57:17.868310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:57.931 [2024-10-08 10:57:17.868317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:57.931 [2024-10-08 10:57:17.868324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:57.931 [2024-10-08 10:57:17.868330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.931 [2024-10-08 10:57:17.868359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:57.931 [2024-10-08 10:57:17.868366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:57.931 [2024-10-08 10:57:17.868372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:57.931 [2024-10-08 10:57:17.868378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.931 [2024-10-08 10:57:17.868415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:57.931 [2024-10-08 10:57:17.868422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:57.931 [2024-10-08 10:57:17.868429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl] duration: 0.000 ms 00:25:57.931 [2024-10-08 10:57:17.868437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.931 [2024-10-08 10:57:17.868531] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7506.351 ms, result 0 00:25:59.898 10:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:59.899 10:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:25:59.899 10:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:59.899 10:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:59.899 10:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:59.899 10:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93381 00:25:59.899 10:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:59.899 10:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93381 00:25:59.899 10:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93381 ']' 00:25:59.899 10:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:59.899 10:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.899 10:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:59.899 10:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.899 10:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:59.899 10:57:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:59.899 [2024-10-08 10:57:20.100966] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:25:59.899 [2024-10-08 10:57:20.101230] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93381 ] 00:25:59.899 [2024-10-08 10:57:20.229635] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
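At this point tcp_target_setup has relaunched spdk_tgt (pinned to core 0, restoring config/tgt.json) and blocks in waitforlisten until pid 93381 answers RPCs on /var/tmp/spdk.sock. The real helper lives in common/autotest_common.sh; a hedged sketch of the polling pattern it implements, using rpc_get_methods as the liveness probe (the actual helper differs in detail):

    # Readiness poll in the spirit of waitforlisten() (details assumed):
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # give up if the target died
            # any RPC succeeding means the socket is up and serving
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }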
00:25:59.899 [2024-10-08 10:57:20.245508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.899 [2024-10-08 10:57:20.274084] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.157 [2024-10-08 10:57:20.521767] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:00.157 [2024-10-08 10:57:20.521992] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:00.157 [2024-10-08 10:57:20.659685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.157 [2024-10-08 10:57:20.659844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:00.157 [2024-10-08 10:57:20.659971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:00.157 [2024-10-08 10:57:20.659998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.157 [2024-10-08 10:57:20.660063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.157 [2024-10-08 10:57:20.660126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:00.157 [2024-10-08 10:57:20.660145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:26:00.157 [2024-10-08 10:57:20.660159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.157 [2024-10-08 10:57:20.660221] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:00.157 [2024-10-08 10:57:20.660450] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:00.157 [2024-10-08 10:57:20.660489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.157 [2024-10-08 10:57:20.660542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:00.157 [2024-10-08 10:57:20.660561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.276 ms 00:26:00.157 [2024-10-08 10:57:20.660575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.157 [2024-10-08 10:57:20.661541] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:00.157 [2024-10-08 10:57:20.663555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.157 [2024-10-08 10:57:20.663646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:00.157 [2024-10-08 10:57:20.663692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.016 ms 00:26:00.157 [2024-10-08 10:57:20.663714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.157 [2024-10-08 10:57:20.663761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.157 [2024-10-08 10:57:20.663894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:00.157 [2024-10-08 10:57:20.663914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:00.157 [2024-10-08 10:57:20.663929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.157 [2024-10-08 10:57:20.668143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.157 [2024-10-08 10:57:20.668229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:00.157 [2024-10-08 10:57:20.668269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.154 ms 00:26:00.157 [2024-10-08 10:57:20.668286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:26:00.157 [2024-10-08 10:57:20.668326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.157 [2024-10-08 10:57:20.668344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:00.157 [2024-10-08 10:57:20.668359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:00.157 [2024-10-08 10:57:20.668374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.157 [2024-10-08 10:57:20.668450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.157 [2024-10-08 10:57:20.668470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:00.157 [2024-10-08 10:57:20.668486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:00.157 [2024-10-08 10:57:20.668503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.157 [2024-10-08 10:57:20.668535] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:00.157 [2024-10-08 10:57:20.669685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.157 [2024-10-08 10:57:20.669767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:00.157 [2024-10-08 10:57:20.669831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.157 ms 00:26:00.157 [2024-10-08 10:57:20.669850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.157 [2024-10-08 10:57:20.669881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.157 [2024-10-08 10:57:20.669898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:00.157 [2024-10-08 10:57:20.669949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:00.157 [2024-10-08 10:57:20.669961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.157 [2024-10-08 10:57:20.669982] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:00.157 [2024-10-08 10:57:20.669997] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:00.157 [2024-10-08 10:57:20.670027] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:00.157 [2024-10-08 10:57:20.670042] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:00.157 [2024-10-08 10:57:20.670123] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:00.157 [2024-10-08 10:57:20.670133] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:00.157 [2024-10-08 10:57:20.670143] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:00.157 [2024-10-08 10:57:20.670152] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:00.157 [2024-10-08 10:57:20.670159] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:00.157 [2024-10-08 10:57:20.670167] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:00.157 [2024-10-08 10:57:20.670173] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:00.157 [2024-10-08 10:57:20.670179] ftl_layout.c: 
691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:00.157 [2024-10-08 10:57:20.670184] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:00.157 [2024-10-08 10:57:20.670191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.157 [2024-10-08 10:57:20.670196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:00.157 [2024-10-08 10:57:20.670202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:26:00.157 [2024-10-08 10:57:20.670212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.157 [2024-10-08 10:57:20.670280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.157 [2024-10-08 10:57:20.670289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:00.157 [2024-10-08 10:57:20.670295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:26:00.157 [2024-10-08 10:57:20.670300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.157 [2024-10-08 10:57:20.670376] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:00.157 [2024-10-08 10:57:20.670385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:00.157 [2024-10-08 10:57:20.670391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:00.157 [2024-10-08 10:57:20.670397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:00.157 [2024-10-08 10:57:20.670406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:00.157 [2024-10-08 10:57:20.670411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:00.157 [2024-10-08 10:57:20.670416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:00.157 [2024-10-08 10:57:20.670421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:00.157 [2024-10-08 10:57:20.670427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:00.157 [2024-10-08 10:57:20.670431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:00.157 [2024-10-08 10:57:20.670437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:00.157 [2024-10-08 10:57:20.670442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:00.157 [2024-10-08 10:57:20.670447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:00.157 [2024-10-08 10:57:20.670453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:00.157 [2024-10-08 10:57:20.670458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:00.157 [2024-10-08 10:57:20.670463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:00.157 [2024-10-08 10:57:20.670468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:00.157 [2024-10-08 10:57:20.670480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:00.157 [2024-10-08 10:57:20.670485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:00.157 [2024-10-08 10:57:20.670495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:00.157 [2024-10-08 10:57:20.670500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:00.157 [2024-10-08 10:57:20.670505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:00.157 [2024-10-08 
10:57:20.670510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:00.157 [2024-10-08 10:57:20.670515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:00.157 [2024-10-08 10:57:20.670520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:00.157 [2024-10-08 10:57:20.670525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:00.157 [2024-10-08 10:57:20.670530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:00.157 [2024-10-08 10:57:20.670535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:00.157 [2024-10-08 10:57:20.670540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:00.157 [2024-10-08 10:57:20.670545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:00.157 [2024-10-08 10:57:20.670551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:00.157 [2024-10-08 10:57:20.670556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:00.158 [2024-10-08 10:57:20.670561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:00.158 [2024-10-08 10:57:20.670568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:00.158 [2024-10-08 10:57:20.670574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:00.158 [2024-10-08 10:57:20.670579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:00.158 [2024-10-08 10:57:20.670585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:00.158 [2024-10-08 10:57:20.670591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:00.158 [2024-10-08 10:57:20.670597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:00.158 [2024-10-08 10:57:20.670603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:00.158 [2024-10-08 10:57:20.670609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:00.158 [2024-10-08 10:57:20.670615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:00.158 [2024-10-08 10:57:20.670620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:00.158 [2024-10-08 10:57:20.670626] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:00.158 [2024-10-08 10:57:20.670632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:00.158 [2024-10-08 10:57:20.670639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:00.158 [2024-10-08 10:57:20.670645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:00.158 [2024-10-08 10:57:20.670652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:00.158 [2024-10-08 10:57:20.670658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:00.158 [2024-10-08 10:57:20.670667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:00.158 [2024-10-08 10:57:20.670673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:00.158 [2024-10-08 10:57:20.670679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:00.158 [2024-10-08 10:57:20.670685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:00.158 [2024-10-08 10:57:20.670692] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB 
metadata layout - nvc: 00:26:00.158 [2024-10-08 10:57:20.670700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.158 [2024-10-08 10:57:20.670708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:00.158 [2024-10-08 10:57:20.670714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:00.158 [2024-10-08 10:57:20.670720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:00.158 [2024-10-08 10:57:20.670726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:00.158 [2024-10-08 10:57:20.670733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:00.158 [2024-10-08 10:57:20.670739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:00.158 [2024-10-08 10:57:20.670745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:00.158 [2024-10-08 10:57:20.670752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:00.158 [2024-10-08 10:57:20.670758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:00.158 [2024-10-08 10:57:20.670764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:00.158 [2024-10-08 10:57:20.670772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:00.158 [2024-10-08 10:57:20.670778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:00.158 [2024-10-08 10:57:20.670784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:00.158 [2024-10-08 10:57:20.670790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:00.158 [2024-10-08 10:57:20.670807] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:00.158 [2024-10-08 10:57:20.670815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.158 [2024-10-08 10:57:20.670822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:00.158 [2024-10-08 10:57:20.670829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:00.158 [2024-10-08 10:57:20.670835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:00.158 [2024-10-08 10:57:20.670841] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:00.158 [2024-10-08 10:57:20.670848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:00.158 [2024-10-08 10:57:20.670858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:00.158 [2024-10-08 10:57:20.670865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.526 ms 00:26:00.158 [2024-10-08 10:57:20.670874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:00.158 [2024-10-08 10:57:20.670905] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:00.158 [2024-10-08 10:57:20.670913] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:02.687 [2024-10-08 10:57:22.865118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.865177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:02.687 [2024-10-08 10:57:22.865191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2194.200 ms 00:26:02.687 [2024-10-08 10:57:22.865200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.873106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.873142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:02.687 [2024-10-08 10:57:22.873153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.826 ms 00:26:02.687 [2024-10-08 10:57:22.873160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.873201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.873210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:02.687 [2024-10-08 10:57:22.873218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:02.687 [2024-10-08 10:57:22.873233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.890194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.890235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:02.687 [2024-10-08 10:57:22.890248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.915 ms 00:26:02.687 [2024-10-08 10:57:22.890262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.890295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.890304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:02.687 [2024-10-08 10:57:22.890314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:02.687 [2024-10-08 10:57:22.890321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.890676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.890692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:02.687 [2024-10-08 10:57:22.890701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.294 ms 00:26:02.687 [2024-10-08 10:57:22.890709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
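Because the target went down with prep_upgrade_on_shutdown armed, this startup takes the restore path: the NV cache is scrubbed and its state reloaded, then band, valid-map, trim, and P2L metadata are replayed before L2P restore. A check a script could make once startup finishes, with the jq filter modeled on the ones upgrade_shutdown.sh already uses:

    # Sanity check after restore: count fully valid CLOSED bands
    closed=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "base_device")
               | .bands[] | select(.state == "CLOSED" and .validity == 1.0)] | length')
    (( closed > 0 )) || echo "FTL restore lost band state" >&2

In the property dump further below, band ids 0-1 indeed come back CLOSED at validity 1.0 and id 2 at ~0.0078 (2048 of 261120 blocks), matching the band dump printed at shutdown.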
00:26:02.687 [2024-10-08 10:57:22.890748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.890757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:02.687 [2024-10-08 10:57:22.890772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:02.687 [2024-10-08 10:57:22.890780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.896048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.896077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:02.687 [2024-10-08 10:57:22.896086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.219 ms 00:26:02.687 [2024-10-08 10:57:22.896094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.898684] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:02.687 [2024-10-08 10:57:22.898721] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:02.687 [2024-10-08 10:57:22.898732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.898740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:02.687 [2024-10-08 10:57:22.898748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.557 ms 00:26:02.687 [2024-10-08 10:57:22.898755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.902824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.902854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:02.687 [2024-10-08 10:57:22.902869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.031 ms 00:26:02.687 [2024-10-08 10:57:22.902876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.904511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.904542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:02.687 [2024-10-08 10:57:22.904551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.593 ms 00:26:02.687 [2024-10-08 10:57:22.904559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.906178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.906206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:02.687 [2024-10-08 10:57:22.906214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.587 ms 00:26:02.687 [2024-10-08 10:57:22.906221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.906538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.906549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:02.687 [2024-10-08 10:57:22.906557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.254 ms 00:26:02.687 [2024-10-08 10:57:22.906564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.920597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.920775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:02.687 [2024-10-08 10:57:22.920815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.015 ms 00:26:02.687 [2024-10-08 10:57:22.920824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.936026] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:02.687 [2024-10-08 10:57:22.936763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.936807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:02.687 [2024-10-08 10:57:22.936824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.597 ms 00:26:02.687 [2024-10-08 10:57:22.936837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.936896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.936910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:02.687 [2024-10-08 10:57:22.936920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:02.687 [2024-10-08 10:57:22.936929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.936987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.936998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:02.687 [2024-10-08 10:57:22.937007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:02.687 [2024-10-08 10:57:22.937015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.937039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.937049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:02.687 [2024-10-08 10:57:22.937057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:02.687 [2024-10-08 10:57:22.937065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.937097] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:02.687 [2024-10-08 10:57:22.937108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.937116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:02.687 [2024-10-08 10:57:22.937124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:02.687 [2024-10-08 10:57:22.937132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.940500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.940536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:02.687 [2024-10-08 10:57:22.940552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.348 ms 00:26:02.687 [2024-10-08 10:57:22.940560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:02.687 [2024-10-08 10:57:22.940626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:02.687 [2024-10-08 10:57:22.940635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:02.687 
[2024-10-08 10:57:22.940643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms
00:26:02.687 [2024-10-08 10:57:22.940650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:26:02.687 [2024-10-08 10:57:22.941509] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2281.427 ms, result 0
00:26:02.687 [2024-10-08 10:57:22.957292] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:02.687 [2024-10-08 10:57:22.973282] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:26:02.687 [2024-10-08 10:57:22.981399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:26:02.687 10:57:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:02.687 10:57:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0
00:26:02.687 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:26:02.687 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:26:02.687 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:26:02.687 [2024-10-08 10:57:23.205485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:26:02.687 [2024-10-08 10:57:23.205529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:26:02.687 [2024-10-08 10:57:23.205543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms
00:26:02.687 [2024-10-08 10:57:23.205551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:26:02.687 [2024-10-08 10:57:23.205580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:26:02.687 [2024-10-08 10:57:23.205589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:26:02.687 [2024-10-08 10:57:23.205597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:26:02.687 [2024-10-08 10:57:23.205604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:26:02.687 [2024-10-08 10:57:23.205627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:26:02.687 [2024-10-08 10:57:23.205635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:26:02.687 [2024-10-08 10:57:23.205644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms
00:26:02.687 [2024-10-08 10:57:23.205651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:26:02.687 [2024-10-08 10:57:23.205706] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.213 ms, result 0
00:26:02.687 true
00:26:02.687 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:26:02.949 {
00:26:02.949   "name": "ftl",
00:26:02.949   "properties": [
00:26:02.949     {
00:26:02.949       "name": "superblock_version",
00:26:02.949       "value": 5,
00:26:02.949       "read-only": true
00:26:02.949     },
00:26:02.949     {
00:26:02.949       "name": "base_device",
00:26:02.949       "bands": [
00:26:02.949         {
00:26:02.949           "id": 0,
00:26:02.949           "state": "CLOSED",
00:26:02.949           "validity": 1.0
00:26:02.949         },
00:26:02.949         {
00:26:02.949           "id": 1,
00:26:02.949           "state": "CLOSED",
00:26:02.949           "validity": 1.0
00:26:02.949         },
00:26:02.949         {
00:26:02.949           "id": 2,
00:26:02.949           "state": "CLOSED",
00:26:02.949           "validity": 0.007843137254901933
00:26:02.949         },
00:26:02.949         {
00:26:02.949           "id": 3,
00:26:02.949           "state": "FREE",
00:26:02.949           "validity": 0.0
00:26:02.949         },
00:26:02.949         {
00:26:02.949           "id": 4,
00:26:02.949           "state": "FREE",
00:26:02.949           "validity": 0.0
00:26:02.949         },
00:26:02.949         {
00:26:02.949           "id": 5,
00:26:02.949           "state": "FREE",
00:26:02.949           "validity": 0.0
00:26:02.949         },
00:26:02.949         {
00:26:02.949           "id": 6,
00:26:02.949           "state": "FREE",
00:26:02.949           "validity": 0.0
00:26:02.949         },
00:26:02.949         {
00:26:02.949           "id": 7,
00:26:02.949           "state": "FREE",
00:26:02.950           "validity": 0.0
00:26:02.950         },
00:26:02.950         {
00:26:02.950           "id": 8,
00:26:02.950           "state": "FREE",
00:26:02.950           "validity": 0.0
00:26:02.950         },
00:26:02.950         {
00:26:02.950           "id": 9,
00:26:02.950           "state": "FREE",
00:26:02.950           "validity": 0.0
00:26:02.950         },
00:26:02.950         {
00:26:02.950           "id": 10,
00:26:02.950           "state": "FREE",
00:26:02.950           "validity": 0.0
00:26:02.950         },
00:26:02.950         {
00:26:02.950           "id": 11,
00:26:02.950           "state": "FREE",
00:26:02.950           "validity": 0.0
00:26:02.950         },
00:26:02.950         {
00:26:02.950           "id": 12,
00:26:02.950           "state": "FREE",
00:26:02.950           "validity": 0.0
00:26:02.950         },
00:26:02.950         {
00:26:02.950           "id": 13,
00:26:02.950           "state": "FREE",
00:26:02.950           "validity": 0.0
00:26:02.950         },
00:26:02.950         {
00:26:02.950           "id": 14,
00:26:02.950           "state": "FREE",
00:26:02.950           "validity": 0.0
00:26:02.950         },
00:26:02.950         {
00:26:02.950           "id": 15,
00:26:02.950           "state": "FREE",
00:26:02.950           "validity": 0.0
00:26:02.950         },
00:26:02.950         {
00:26:02.950           "id": 16,
00:26:02.950           "state": "FREE",
00:26:02.950           "validity": 0.0
00:26:02.950         },
00:26:02.950         {
00:26:02.950           "id": 17,
00:26:02.950           "state": "FREE",
00:26:02.950           "validity": 0.0
00:26:02.950         }
00:26:02.950       ],
00:26:02.950       "read-only": true
00:26:02.950     },
00:26:02.950     {
00:26:02.950       "name": "cache_device",
00:26:02.950       "type": "bdev",
00:26:02.950       "chunks": [
00:26:02.950         {
00:26:02.950           "id": 0,
00:26:02.950           "state": "INACTIVE",
00:26:02.950           "utilization": 0.0
00:26:02.950         },
00:26:02.950         {
00:26:02.950           "id": 1,
00:26:02.950           "state": "OPEN",
00:26:02.950           "utilization": 0.0
00:26:02.950         },
00:26:02.950         {
00:26:02.950           "id": 2,
00:26:02.950           "state": "OPEN",
00:26:02.950           "utilization": 0.0
00:26:02.950         },
00:26:02.950         {
00:26:02.950           "id": 3,
00:26:02.950           "state": "FREE",
00:26:02.950           "utilization": 0.0
00:26:02.950         },
00:26:02.950         {
00:26:02.950           "id": 4,
00:26:02.950           "state": "FREE",
00:26:02.950           "utilization": 0.0
00:26:02.950         }
00:26:02.950       ],
00:26:02.950       "read-only": true
00:26:02.950     },
00:26:02.950     {
00:26:02.950       "name": "verbose_mode",
00:26:02.950       "value": true,
00:26:02.950       "unit": "",
00:26:02.950       "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:26:02.950     },
00:26:02.950     {
00:26:02.950       "name": "prep_upgrade_on_shutdown",
00:26:02.950       "value": false,
00:26:02.950       "unit": "",
00:26:02.950       "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:26:02.950     }
00:26:02.950   ]
00:26:02.950 }
00:26:02.950 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:26:02.950 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:03.208 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:03.208 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:03.209 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:03.209 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:03.209 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:03.467 Validate MD5 checksum, iteration 1 00:26:03.467 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:03.467 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:03.467 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:03.467 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:03.467 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:03.467 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:03.467 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:03.467 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:03.467 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:03.467 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:03.467 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:03.467 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:03.467 10:57:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:03.467 [2024-10-08 10:57:23.920770] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:26:03.467 [2024-10-08 10:57:23.921060] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93432 ] 00:26:03.725 [2024-10-08 10:57:24.048979] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
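The xtrace above shows the shape of the checksum pass: the jq filters first confirm that no cache chunks are utilized (used=0) and no bands are OPENED (opened=0), then test_validate_checksum reads the FTL bdev back over NVMe/TCP in 1 GiB slices with spdk_dd and hashes each slice with md5sum. A minimal sketch of that loop, assuming SPDK_DIR points at the repo checkout and expected_sums holds the reference checksums (both names are illustrative, not taken from the trace):

    skip=0
    iterations=2
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Pull 1024 x 1 MiB blocks from ftln1 over NVMe/TCP into a scratch file.
        "$SPDK_DIR/build/bin/spdk_dd" '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$SPDK_DIR/test/ftl/config/ini.json" \
            --ib=ftln1 --of="$SPDK_DIR/test/ftl/file" \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$SPDK_DIR/test/ftl/file" | cut -f1 -d' ')
        # A mismatch against the reference checksum fails the test.
        [[ $sum == "${expected_sums[$i]}" ]] || exit 1
    done

The --skip advancing by 1024 blocks per iteration matches the skip=0 and skip=1024 values visible in the trace, so each pass hashes a distinct 1 GiB region of the device.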
00:26:03.725 [2024-10-08 10:57:24.068706] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.725 [2024-10-08 10:57:24.099771] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.104  [2024-10-08T10:57:26.248Z] Copying: 695/1024 [MB] (695 MBps) [2024-10-08T10:57:26.815Z] Copying: 1024/1024 [MB] (average 676 MBps) 00:26:06.238 00:26:06.238 10:57:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:06.238 10:57:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:08.135 10:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:08.135 Validate MD5 checksum, iteration 2 00:26:08.135 10:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0ccd91c55da9d03013419f515f44f802 00:26:08.135 10:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0ccd91c55da9d03013419f515f44f802 != \0\c\c\d\9\1\c\5\5\d\a\9\d\0\3\0\1\3\4\1\9\f\5\1\5\f\4\4\f\8\0\2 ]] 00:26:08.135 10:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:08.135 10:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:08.135 10:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:08.135 10:57:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:08.135 10:57:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:08.135 10:57:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:08.135 10:57:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:08.135 10:57:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:08.135 10:57:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:08.392 [2024-10-08 10:57:28.748612] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:26:08.392 [2024-10-08 10:57:28.748996] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93491 ] 00:26:08.392 [2024-10-08 10:57:28.876524] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
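The backslash-laden comparison in the trace above ([[ 0ccd91c55da9d03013419f515f44f802 != \0\c\c\d... ]]) is not corruption: inside [[ ]] the right-hand side of != is treated as a glob pattern unless quoted, so bash's set -x re-prints the quoted operand with every character escaped to show that it matches literally. Functionally it is a plain string equality check, roughly (variable names illustrative):

    sum=$(md5sum "$file" | cut -f1 -d' ')
    if [[ $sum != "$expected_md5" ]]; then
        exit 1    # data changed across the shutdown
    fi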
00:26:08.392 [2024-10-08 10:57:28.897114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.392 [2024-10-08 10:57:28.935502] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.765  [2024-10-08T10:57:30.909Z] Copying: 707/1024 [MB] (707 MBps) [2024-10-08T10:57:31.167Z] Copying: 1024/1024 [MB] (average 699 MBps) 00:26:10.590 00:26:10.590 10:57:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:10.590 10:57:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1902c472766ccd4efcca25fb294cd65c 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1902c472766ccd4efcca25fb294cd65c != \1\9\0\2\c\4\7\2\7\6\6\c\c\d\4\e\f\c\c\a\2\5\f\b\2\9\4\c\d\6\5\c ]] 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 93381 ]] 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 93381 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93549 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93549 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93549 ']' 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.118 10:57:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:13.118 [2024-10-08 10:57:33.268674] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
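What follows is the core of the test: tcp_target_shutdown_dirty kills the target that produced the data above (pid 93381) with SIGKILL, so FTL never gets to persist a clean shutdown state, and tcp_target_setup starts a fresh target from the saved configuration, forcing the dirty-shutdown recovery path (SHM clean 0, P2L checkpoint restore, open-chunk recovery) traced below. Reduced to a sketch, using the ftl/common.sh names visible in the xtrace (the backgrounding and pid capture are assumptions, not shown in the log):

    # tcp_target_shutdown_dirty: SIGKILL leaves the FTL superblock marked dirty.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid

    # tcp_target_setup: relaunch the target from the JSON config captured earlier,
    # then wait for the RPC socket at /var/tmp/spdk.sock to come up.
    "$SPDK_DIR/build/bin/spdk_tgt" '--cpumask=[0]' \
        --config="$SPDK_DIR/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"

Because the new target (pid 93549) loads the same tgt.json, it sees the same bdevs and must reconstruct the L2P and NV cache state from what survived the kill; the per-step trace that follows is that reconstruction.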
00:26:13.118 [2024-10-08 10:57:33.268763] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93549 ] 00:26:13.118 [2024-10-08 10:57:33.391128] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:13.118 [2024-10-08 10:57:33.409762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.118 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 93381 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:13.118 [2024-10-08 10:57:33.438372] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.118 [2024-10-08 10:57:33.687168] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:13.118 [2024-10-08 10:57:33.687351] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:13.377 [2024-10-08 10:57:33.824861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.377 [2024-10-08 10:57:33.824893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:13.377 [2024-10-08 10:57:33.824904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:13.377 [2024-10-08 10:57:33.824911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.377 [2024-10-08 10:57:33.824948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.377 [2024-10-08 10:57:33.824956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:13.377 [2024-10-08 10:57:33.824962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:13.377 [2024-10-08 10:57:33.824967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.377 [2024-10-08 10:57:33.824985] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:13.377 [2024-10-08 10:57:33.825152] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:13.377 [2024-10-08 10:57:33.825163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.377 [2024-10-08 10:57:33.825171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:13.377 [2024-10-08 10:57:33.825177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:26:13.377 [2024-10-08 10:57:33.825183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.377 [2024-10-08 10:57:33.825372] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:13.377 [2024-10-08 10:57:33.828279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.377 [2024-10-08 10:57:33.828313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:13.377 [2024-10-08 10:57:33.828321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.908 ms 00:26:13.377 [2024-10-08 10:57:33.828330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.377 [2024-10-08 10:57:33.829058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.377 [2024-10-08 10:57:33.829087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super 
block 00:26:13.377 [2024-10-08 10:57:33.829095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:13.377 [2024-10-08 10:57:33.829104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.377 [2024-10-08 10:57:33.829316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.377 [2024-10-08 10:57:33.829325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:13.377 [2024-10-08 10:57:33.829337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.165 ms 00:26:13.377 [2024-10-08 10:57:33.829343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.377 [2024-10-08 10:57:33.829370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.377 [2024-10-08 10:57:33.829376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:13.377 [2024-10-08 10:57:33.829382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:13.377 [2024-10-08 10:57:33.829387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.377 [2024-10-08 10:57:33.829406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.377 [2024-10-08 10:57:33.829412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:13.377 [2024-10-08 10:57:33.829418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:13.377 [2024-10-08 10:57:33.829429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.377 [2024-10-08 10:57:33.829443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:13.377 [2024-10-08 10:57:33.830149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.377 [2024-10-08 10:57:33.830165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:13.377 [2024-10-08 10:57:33.830177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.708 ms 00:26:13.377 [2024-10-08 10:57:33.830183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.377 [2024-10-08 10:57:33.830201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.377 [2024-10-08 10:57:33.830206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:13.377 [2024-10-08 10:57:33.830212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:13.377 [2024-10-08 10:57:33.830219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.377 [2024-10-08 10:57:33.830235] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:13.377 [2024-10-08 10:57:33.830251] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:13.377 [2024-10-08 10:57:33.830278] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:13.377 [2024-10-08 10:57:33.830289] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:13.377 [2024-10-08 10:57:33.830368] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:13.377 [2024-10-08 10:57:33.830378] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:13.377 [2024-10-08 10:57:33.830387] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:13.377 [2024-10-08 10:57:33.830395] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:13.377 [2024-10-08 10:57:33.830402] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:13.377 [2024-10-08 10:57:33.830408] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:13.377 [2024-10-08 10:57:33.830413] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:13.377 [2024-10-08 10:57:33.830418] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:13.377 [2024-10-08 10:57:33.830426] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:13.377 [2024-10-08 10:57:33.830432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.377 [2024-10-08 10:57:33.830437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:13.377 [2024-10-08 10:57:33.830443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:26:13.377 [2024-10-08 10:57:33.830448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.377 [2024-10-08 10:57:33.830514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.377 [2024-10-08 10:57:33.830522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:13.377 [2024-10-08 10:57:33.830527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:26:13.377 [2024-10-08 10:57:33.830534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.377 [2024-10-08 10:57:33.830608] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:13.377 [2024-10-08 10:57:33.830616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:13.377 [2024-10-08 10:57:33.830622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:13.378 [2024-10-08 10:57:33.830628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.378 [2024-10-08 10:57:33.830633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:13.378 [2024-10-08 10:57:33.830638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:13.378 [2024-10-08 10:57:33.830644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:13.378 [2024-10-08 10:57:33.830650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:13.378 [2024-10-08 10:57:33.830656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:13.378 [2024-10-08 10:57:33.830662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.378 [2024-10-08 10:57:33.830667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:13.378 [2024-10-08 10:57:33.830672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:13.378 [2024-10-08 10:57:33.830677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.378 [2024-10-08 10:57:33.830683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:13.378 [2024-10-08 10:57:33.830688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:13.378 [2024-10-08 10:57:33.830697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.378 [2024-10-08 
10:57:33.830703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:13.378 [2024-10-08 10:57:33.830711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:13.378 [2024-10-08 10:57:33.830716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.378 [2024-10-08 10:57:33.830721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:13.378 [2024-10-08 10:57:33.830726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:13.378 [2024-10-08 10:57:33.830732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:13.378 [2024-10-08 10:57:33.830737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:13.378 [2024-10-08 10:57:33.830741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:13.378 [2024-10-08 10:57:33.830746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:13.378 [2024-10-08 10:57:33.830751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:13.378 [2024-10-08 10:57:33.830756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:13.378 [2024-10-08 10:57:33.830761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:13.378 [2024-10-08 10:57:33.830766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:13.378 [2024-10-08 10:57:33.830771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:13.378 [2024-10-08 10:57:33.830776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:13.378 [2024-10-08 10:57:33.830782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:13.378 [2024-10-08 10:57:33.830788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:13.378 [2024-10-08 10:57:33.830802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.378 [2024-10-08 10:57:33.830809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:13.378 [2024-10-08 10:57:33.830814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:13.378 [2024-10-08 10:57:33.830820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.378 [2024-10-08 10:57:33.830826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:13.378 [2024-10-08 10:57:33.830832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:13.378 [2024-10-08 10:57:33.830838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.378 [2024-10-08 10:57:33.830844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:13.378 [2024-10-08 10:57:33.830849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:13.378 [2024-10-08 10:57:33.830855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.378 [2024-10-08 10:57:33.830860] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:13.378 [2024-10-08 10:57:33.830869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:13.378 [2024-10-08 10:57:33.830878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:13.378 [2024-10-08 10:57:33.830884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:13.378 [2024-10-08 10:57:33.830892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:13.378 [2024-10-08 
10:57:33.830898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:13.378 [2024-10-08 10:57:33.830904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:13.378 [2024-10-08 10:57:33.830911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:13.378 [2024-10-08 10:57:33.830916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:13.378 [2024-10-08 10:57:33.830922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:13.378 [2024-10-08 10:57:33.830929] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:13.378 [2024-10-08 10:57:33.830937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:13.378 [2024-10-08 10:57:33.830946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:13.378 [2024-10-08 10:57:33.830953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:13.378 [2024-10-08 10:57:33.830959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:13.378 [2024-10-08 10:57:33.830965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:13.378 [2024-10-08 10:57:33.830971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:13.378 [2024-10-08 10:57:33.830977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:13.378 [2024-10-08 10:57:33.830984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:13.378 [2024-10-08 10:57:33.830990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:13.378 [2024-10-08 10:57:33.830998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:13.378 [2024-10-08 10:57:33.831004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:13.378 [2024-10-08 10:57:33.831011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:13.378 [2024-10-08 10:57:33.831017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:13.378 [2024-10-08 10:57:33.831023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:13.378 [2024-10-08 10:57:33.831029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:13.378 [2024-10-08 10:57:33.831035] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:13.378 [2024-10-08 10:57:33.831044] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:13.378 [2024-10-08 10:57:33.831051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:13.378 [2024-10-08 10:57:33.831057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:13.378 [2024-10-08 10:57:33.831062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:13.378 [2024-10-08 10:57:33.831067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:13.378 [2024-10-08 10:57:33.831073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.378 [2024-10-08 10:57:33.831078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:13.378 [2024-10-08 10:57:33.831083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.516 ms 00:26:13.378 [2024-10-08 10:57:33.831090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.378 [2024-10-08 10:57:33.836873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.378 [2024-10-08 10:57:33.836974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:13.378 [2024-10-08 10:57:33.836991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.746 ms 00:26:13.378 [2024-10-08 10:57:33.836997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.378 [2024-10-08 10:57:33.837023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.378 [2024-10-08 10:57:33.837117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:13.378 [2024-10-08 10:57:33.837123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:13.378 [2024-10-08 10:57:33.837132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.378 [2024-10-08 10:57:33.854109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.378 [2024-10-08 10:57:33.854140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:13.378 [2024-10-08 10:57:33.854150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.937 ms 00:26:13.378 [2024-10-08 10:57:33.854156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.378 [2024-10-08 10:57:33.854188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.378 [2024-10-08 10:57:33.854196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:13.378 [2024-10-08 10:57:33.854206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:13.378 [2024-10-08 10:57:33.854211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.378 [2024-10-08 10:57:33.854288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.378 [2024-10-08 10:57:33.854296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:13.378 [2024-10-08 10:57:33.854302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:26:13.378 [2024-10-08 10:57:33.854309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.378 [2024-10-08 10:57:33.854338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.378 
[2024-10-08 10:57:33.854344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:13.378 [2024-10-08 10:57:33.854350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:13.378 [2024-10-08 10:57:33.854356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.378 [2024-10-08 10:57:33.859099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.378 [2024-10-08 10:57:33.859138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:13.378 [2024-10-08 10:57:33.859152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.725 ms 00:26:13.378 [2024-10-08 10:57:33.859163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.378 [2024-10-08 10:57:33.859271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.378 [2024-10-08 10:57:33.859292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:13.378 [2024-10-08 10:57:33.859305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:13.378 [2024-10-08 10:57:33.859315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.379 [2024-10-08 10:57:33.863118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.379 [2024-10-08 10:57:33.863159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:13.379 [2024-10-08 10:57:33.863172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.763 ms 00:26:13.379 [2024-10-08 10:57:33.863183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.379 [2024-10-08 10:57:33.864676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.379 [2024-10-08 10:57:33.864713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:13.379 [2024-10-08 10:57:33.864726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.386 ms 00:26:13.379 [2024-10-08 10:57:33.864736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.379 [2024-10-08 10:57:33.877994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.379 [2024-10-08 10:57:33.878027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:13.379 [2024-10-08 10:57:33.878037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.161 ms 00:26:13.379 [2024-10-08 10:57:33.878043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.379 [2024-10-08 10:57:33.878143] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:13.379 [2024-10-08 10:57:33.878211] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:13.379 [2024-10-08 10:57:33.878275] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:13.379 [2024-10-08 10:57:33.878342] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:13.379 [2024-10-08 10:57:33.878349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.379 [2024-10-08 10:57:33.878355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:13.379 [2024-10-08 10:57:33.878361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.270 ms 00:26:13.379 [2024-10-08 10:57:33.878367] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.379 [2024-10-08 10:57:33.878410] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:13.379 [2024-10-08 10:57:33.878419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.379 [2024-10-08 10:57:33.878424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:13.379 [2024-10-08 10:57:33.878431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:13.379 [2024-10-08 10:57:33.878437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.379 [2024-10-08 10:57:33.880271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.379 [2024-10-08 10:57:33.880302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:13.379 [2024-10-08 10:57:33.880310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.817 ms 00:26:13.379 [2024-10-08 10:57:33.880316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.379 [2024-10-08 10:57:33.880781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.379 [2024-10-08 10:57:33.880818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:13.379 [2024-10-08 10:57:33.880826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:13.379 [2024-10-08 10:57:33.880832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.379 [2024-10-08 10:57:33.880874] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:13.379 [2024-10-08 10:57:33.880991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.379 [2024-10-08 10:57:33.880999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:13.379 [2024-10-08 10:57:33.881006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.118 ms 00:26:13.379 [2024-10-08 10:57:33.881012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.948 [2024-10-08 10:57:34.322534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.948 [2024-10-08 10:57:34.322595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:13.948 [2024-10-08 10:57:34.322609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 441.296 ms 00:26:13.948 [2024-10-08 10:57:34.322617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.948 [2024-10-08 10:57:34.323701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.948 [2024-10-08 10:57:34.323739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:13.948 [2024-10-08 10:57:34.323751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.682 ms 00:26:13.948 [2024-10-08 10:57:34.323765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.948 [2024-10-08 10:57:34.324115] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:26:13.948 [2024-10-08 10:57:34.324188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.948 [2024-10-08 10:57:34.324198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:13.948 [2024-10-08 10:57:34.324207] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.394 ms 00:26:13.948 [2024-10-08 10:57:34.324214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.948 [2024-10-08 10:57:34.324319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.948 [2024-10-08 10:57:34.324330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:13.948 [2024-10-08 10:57:34.324338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:13.948 [2024-10-08 10:57:34.324349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:13.948 [2024-10-08 10:57:34.324386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 443.505 ms, result 0 00:26:13.948 [2024-10-08 10:57:34.324424] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:13.948 [2024-10-08 10:57:34.324487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:13.948 [2024-10-08 10:57:34.324497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:13.948 [2024-10-08 10:57:34.324510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:26:13.948 [2024-10-08 10:57:34.324518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.514 [2024-10-08 10:57:34.794710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:14.514 [2024-10-08 10:57:34.794767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:14.514 [2024-10-08 10:57:34.794779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 469.821 ms 00:26:14.514 [2024-10-08 10:57:34.794785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.514 [2024-10-08 10:57:34.795922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:14.514 [2024-10-08 10:57:34.795951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:14.514 [2024-10-08 10:57:34.795960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.722 ms 00:26:14.514 [2024-10-08 10:57:34.795966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.514 [2024-10-08 10:57:34.796231] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:14.514 [2024-10-08 10:57:34.796253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:14.514 [2024-10-08 10:57:34.796260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:14.514 [2024-10-08 10:57:34.796267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:26:14.514 [2024-10-08 10:57:34.796273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.514 [2024-10-08 10:57:34.796294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:14.514 [2024-10-08 10:57:34.796302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:14.514 [2024-10-08 10:57:34.796309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:14.514 [2024-10-08 10:57:34.796314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.514 [2024-10-08 10:57:34.796343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 471.916 ms, result 
0 00:26:14.515 [2024-10-08 10:57:34.796381] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:14.515 [2024-10-08 10:57:34.796389] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:14.515 [2024-10-08 10:57:34.796396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:14.515 [2024-10-08 10:57:34.796403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:14.515 [2024-10-08 10:57:34.796409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 915.534 ms 00:26:14.515 [2024-10-08 10:57:34.796418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.515 [2024-10-08 10:57:34.796442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:14.515 [2024-10-08 10:57:34.796451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:14.515 [2024-10-08 10:57:34.796457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:14.515 [2024-10-08 10:57:34.796463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.515 [2024-10-08 10:57:34.802682] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:14.515 [2024-10-08 10:57:34.802877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:14.515 [2024-10-08 10:57:34.802889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:14.515 [2024-10-08 10:57:34.802897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.402 ms 00:26:14.515 [2024-10-08 10:57:34.802906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.515 [2024-10-08 10:57:34.803443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:14.515 [2024-10-08 10:57:34.803460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:14.515 [2024-10-08 10:57:34.803468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.478 ms 00:26:14.515 [2024-10-08 10:57:34.803475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.515 [2024-10-08 10:57:34.805189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:14.515 [2024-10-08 10:57:34.805205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:14.515 [2024-10-08 10:57:34.805212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.696 ms 00:26:14.515 [2024-10-08 10:57:34.805221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.515 [2024-10-08 10:57:34.805255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:14.515 [2024-10-08 10:57:34.805262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:14.515 [2024-10-08 10:57:34.805269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:14.515 [2024-10-08 10:57:34.805277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.515 [2024-10-08 10:57:34.805359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:14.515 [2024-10-08 10:57:34.805366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:14.515 [2024-10-08 10:57:34.805372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:14.515 [2024-10-08 10:57:34.805377] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.515 [2024-10-08 10:57:34.805397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:14.515 [2024-10-08 10:57:34.805404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:14.515 [2024-10-08 10:57:34.805410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:14.515 [2024-10-08 10:57:34.805416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.515 [2024-10-08 10:57:34.805437] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:14.515 [2024-10-08 10:57:34.805446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:14.515 [2024-10-08 10:57:34.805452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:14.515 [2024-10-08 10:57:34.805458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:14.515 [2024-10-08 10:57:34.805468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.515 [2024-10-08 10:57:34.805507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:14.515 [2024-10-08 10:57:34.805514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:14.515 [2024-10-08 10:57:34.805519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:14.515 [2024-10-08 10:57:34.805525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:14.515 [2024-10-08 10:57:34.806245] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 981.067 ms, result 0 00:26:14.515 [2024-10-08 10:57:34.819136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.515 [2024-10-08 10:57:34.835120] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:14.515 [2024-10-08 10:57:34.843223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:14.515 Validate MD5 checksum, iteration 1 00:26:14.515 10:57:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:14.515 10:57:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:14.515 10:57:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:14.515 10:57:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:14.515 10:57:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:14.515 10:57:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:14.515 10:57:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:14.515 10:57:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:14.515 10:57:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:14.515 10:57:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:14.515 10:57:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:14.515 10:57:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:14.515 10:57:34 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:14.515 10:57:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:14.515 10:57:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:14.515 [2024-10-08 10:57:34.940630] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:26:14.515 [2024-10-08 10:57:34.940871] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93565 ] 00:26:14.515 [2024-10-08 10:57:35.069318] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:14.774 [2024-10-08 10:57:35.090558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.774 [2024-10-08 10:57:35.122885] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.220  [2024-10-08T10:57:37.055Z] Copying: 747/1024 [MB] (747 MBps) [2024-10-08T10:57:37.316Z] Copying: 1024/1024 [MB] (average 730 MBps) 00:26:16.739 00:26:17.087 10:57:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:17.087 10:57:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:18.984 10:57:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:18.984 10:57:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0ccd91c55da9d03013419f515f44f802 00:26:18.984 10:57:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0ccd91c55da9d03013419f515f44f802 != \0\c\c\d\9\1\c\5\5\d\a\9\d\0\3\0\1\3\4\1\9\f\5\1\5\f\4\4\f\8\0\2 ]] 00:26:18.984 10:57:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:18.984 10:57:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:18.984 10:57:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:18.984 Validate MD5 checksum, iteration 2 00:26:18.984 10:57:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:18.984 10:57:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:18.984 10:57:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:18.984 10:57:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:18.984 10:57:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:18.984 10:57:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:18.984 [2024-10-08 
10:57:39.532060] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:26:18.984 [2024-10-08 10:57:39.532308] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93622 ] 00:26:19.243 [2024-10-08 10:57:39.659971] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:19.243 [2024-10-08 10:57:39.682869] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.243 [2024-10-08 10:57:39.714177] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.625  [2024-10-08T10:57:42.145Z] Copying: 120/1024 [MB] (120 MBps) [2024-10-08T10:57:43.080Z] Copying: 172/1024 [MB] (52 MBps) [2024-10-08T10:57:43.646Z] Copying: 680/1024 [MB] (508 MBps) [2024-10-08T10:57:44.590Z] Copying: 1024/1024 [MB] (average 291 MBps) 00:26:24.013 00:26:24.013 10:57:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:24.013 10:57:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1902c472766ccd4efcca25fb294cd65c 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1902c472766ccd4efcca25fb294cd65c != \1\9\0\2\c\4\7\2\7\6\6\c\c\d\4\e\f\c\c\a\2\5\f\b\2\9\4\c\d\6\5\c ]] 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93549 ]] 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93549 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 93549 ']' 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 93549 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:26.554 10:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93549 00:26:26.554 killing process with pid 93549 00:26:26.555 10:57:46 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:26.555 10:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:26.555 10:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93549' 00:26:26.555 10:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 93549 00:26:26.555 10:57:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 93549 00:26:26.555 [2024-10-08 10:57:46.851590] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:26.555 [2024-10-08 10:57:46.856149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.555 [2024-10-08 10:57:46.856181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:26.555 [2024-10-08 10:57:46.856194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:26.555 [2024-10-08 10:57:46.856201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.856217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:26.555 [2024-10-08 10:57:46.856584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.555 [2024-10-08 10:57:46.856600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:26.555 [2024-10-08 10:57:46.856612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.356 ms 00:26:26.555 [2024-10-08 10:57:46.856617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.856809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.555 [2024-10-08 10:57:46.856819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:26.555 [2024-10-08 10:57:46.856826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.174 ms 00:26:26.555 [2024-10-08 10:57:46.856831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.857962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.555 [2024-10-08 10:57:46.857985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:26.555 [2024-10-08 10:57:46.857992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.117 ms 00:26:26.555 [2024-10-08 10:57:46.857998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.858879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.555 [2024-10-08 10:57:46.858898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:26.555 [2024-10-08 10:57:46.858909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.858 ms 00:26:26.555 [2024-10-08 10:57:46.858916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.860401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.555 [2024-10-08 10:57:46.860438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:26.555 [2024-10-08 10:57:46.860447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.457 ms 00:26:26.555 [2024-10-08 10:57:46.860454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.861625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:26:26.555 [2024-10-08 10:57:46.861653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:26.555 [2024-10-08 10:57:46.861665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.143 ms 00:26:26.555 [2024-10-08 10:57:46.861671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.861748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.555 [2024-10-08 10:57:46.861762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:26.555 [2024-10-08 10:57:46.861769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:26:26.555 [2024-10-08 10:57:46.861774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.862990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.555 [2024-10-08 10:57:46.863015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:26.555 [2024-10-08 10:57:46.863022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.202 ms 00:26:26.555 [2024-10-08 10:57:46.863027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.864102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.555 [2024-10-08 10:57:46.864127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:26.555 [2024-10-08 10:57:46.864134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.050 ms 00:26:26.555 [2024-10-08 10:57:46.864139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.865018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.555 [2024-10-08 10:57:46.865044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:26.555 [2024-10-08 10:57:46.865051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.855 ms 00:26:26.555 [2024-10-08 10:57:46.865056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.865966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.555 [2024-10-08 10:57:46.865992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:26.555 [2024-10-08 10:57:46.865999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.864 ms 00:26:26.555 [2024-10-08 10:57:46.866004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.866027] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:26.555 [2024-10-08 10:57:46.866038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:26.555 [2024-10-08 10:57:46.866046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:26.555 [2024-10-08 10:57:46.866052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:26.555 [2024-10-08 10:57:46.866059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 
0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:26.555 [2024-10-08 10:57:46.866146] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:26.555 [2024-10-08 10:57:46.866156] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f77c43c4-6b24-4e97-a2fc-6dd276304a9e 00:26:26.555 [2024-10-08 10:57:46.866162] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:26.555 [2024-10-08 10:57:46.866168] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:26.555 [2024-10-08 10:57:46.866173] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:26.555 [2024-10-08 10:57:46.866179] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:26.555 [2024-10-08 10:57:46.866184] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:26.555 [2024-10-08 10:57:46.866194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:26.555 [2024-10-08 10:57:46.866199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:26.555 [2024-10-08 10:57:46.866204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:26.555 [2024-10-08 10:57:46.866209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:26.555 [2024-10-08 10:57:46.866215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.555 [2024-10-08 10:57:46.866221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:26.555 [2024-10-08 10:57:46.866227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:26:26.555 [2024-10-08 10:57:46.866233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.867413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.555 
[2024-10-08 10:57:46.867435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:26.555 [2024-10-08 10:57:46.867442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.165 ms 00:26:26.555 [2024-10-08 10:57:46.867448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.867519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:26.555 [2024-10-08 10:57:46.867526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:26.555 [2024-10-08 10:57:46.867532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:26:26.555 [2024-10-08 10:57:46.867540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.555 [2024-10-08 10:57:46.871996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:26.555 [2024-10-08 10:57:46.872021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:26.556 [2024-10-08 10:57:46.872028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:26.556 [2024-10-08 10:57:46.872034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.556 [2024-10-08 10:57:46.872056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:26.556 [2024-10-08 10:57:46.872062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:26.556 [2024-10-08 10:57:46.872068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:26.556 [2024-10-08 10:57:46.872077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.556 [2024-10-08 10:57:46.872126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:26.556 [2024-10-08 10:57:46.872134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:26.556 [2024-10-08 10:57:46.872140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:26.556 [2024-10-08 10:57:46.872146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.556 [2024-10-08 10:57:46.872163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:26.556 [2024-10-08 10:57:46.872169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:26.556 [2024-10-08 10:57:46.872175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:26.556 [2024-10-08 10:57:46.872181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.556 [2024-10-08 10:57:46.879686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:26.556 [2024-10-08 10:57:46.879725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:26.556 [2024-10-08 10:57:46.879736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:26.556 [2024-10-08 10:57:46.879745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.556 [2024-10-08 10:57:46.885761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:26.556 [2024-10-08 10:57:46.885792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:26.556 [2024-10-08 10:57:46.885814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:26.556 [2024-10-08 10:57:46.885836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.556 [2024-10-08 10:57:46.885874] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Rollback 00:26:26.556 [2024-10-08 10:57:46.885881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:26.556 [2024-10-08 10:57:46.885887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:26.556 [2024-10-08 10:57:46.885893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.556 [2024-10-08 10:57:46.885932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:26.556 [2024-10-08 10:57:46.885939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:26.556 [2024-10-08 10:57:46.885948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:26.556 [2024-10-08 10:57:46.885954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.556 [2024-10-08 10:57:46.886008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:26.556 [2024-10-08 10:57:46.886016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:26.556 [2024-10-08 10:57:46.886022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:26.556 [2024-10-08 10:57:46.886028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.556 [2024-10-08 10:57:46.886051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:26.556 [2024-10-08 10:57:46.886058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:26.556 [2024-10-08 10:57:46.886063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:26.556 [2024-10-08 10:57:46.886069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.556 [2024-10-08 10:57:46.886101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:26.556 [2024-10-08 10:57:46.886108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:26.556 [2024-10-08 10:57:46.886113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:26.556 [2024-10-08 10:57:46.886119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.556 [2024-10-08 10:57:46.886154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:26.556 [2024-10-08 10:57:46.886161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:26.556 [2024-10-08 10:57:46.886167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:26.556 [2024-10-08 10:57:46.886173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:26.556 [2024-10-08 10:57:46.886269] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 30.094 ms, result 0 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:26.556 Remove shared memory files 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # 
remove_shm 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid93381 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:26.556 00:26:26.556 real 1m7.994s 00:26:26.556 user 1m34.855s 00:26:26.556 sys 0m17.348s 00:26:26.556 ************************************ 00:26:26.556 END TEST ftl_upgrade_shutdown 00:26:26.556 ************************************ 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:26.556 10:57:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:26.556 10:57:47 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:26:26.556 10:57:47 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:26.556 10:57:47 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:26:26.556 10:57:47 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:26.556 10:57:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:26.556 ************************************ 00:26:26.556 START TEST ftl_restore_fast 00:26:26.556 ************************************ 00:26:26.556 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:26:26.817 * Looking for test storage... 
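The two "Validate MD5 checksum" iterations traced in the upgrade_shutdown test above follow a simple read-back pattern: advance a skip offset in 1024 MiB steps, pull that window off the FTL bdev over NVMe/TCP with tcp_dd, hash it, and compare against the checksum recorded before shutdown. The sketch below is a loose bash reconstruction of that loop from the trace lines; the iterations count and the md5_sums array are assumptions, everything else mirrors the traced commands.

    # Hedged reconstruction of test_validate_checksum as traced above.
    # md5_sums[] and iterations are assumed names; skip/tcp_dd/md5sum/cut
    # and the 1024 MiB window size are taken directly from the trace.
    test_validate_checksum() {
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))
            sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
            [[ $sum == "${md5_sums[i]}" ]]   # a mismatch fails the test via non-zero exit
        done
    }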
00:26:26.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:26.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.817 --rc genhtml_branch_coverage=1 00:26:26.817 --rc genhtml_function_coverage=1 00:26:26.817 --rc genhtml_legend=1 00:26:26.817 --rc geninfo_all_blocks=1 00:26:26.817 --rc geninfo_unexecuted_blocks=1 00:26:26.817 00:26:26.817 ' 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:26:26.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.817 --rc genhtml_branch_coverage=1 00:26:26.817 --rc genhtml_function_coverage=1 00:26:26.817 --rc genhtml_legend=1 00:26:26.817 --rc geninfo_all_blocks=1 00:26:26.817 --rc geninfo_unexecuted_blocks=1 00:26:26.817 00:26:26.817 ' 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:26.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.817 --rc genhtml_branch_coverage=1 00:26:26.817 --rc genhtml_function_coverage=1 00:26:26.817 --rc genhtml_legend=1 00:26:26.817 --rc geninfo_all_blocks=1 00:26:26.817 --rc geninfo_unexecuted_blocks=1 00:26:26.817 00:26:26.817 ' 00:26:26.817 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:26.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.818 --rc genhtml_branch_coverage=1 00:26:26.818 --rc genhtml_function_coverage=1 00:26:26.818 --rc genhtml_legend=1 00:26:26.818 --rc geninfo_all_blocks=1 00:26:26.818 --rc geninfo_unexecuted_blocks=1 00:26:26.818 00:26:26.818 ' 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- 
ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.IMga5S5PnT 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=93779 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 93779 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 93779 ']' 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:26:26.818 10:57:47 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:26.818 [2024-10-08 10:57:47.364670] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
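restore.sh's argument handling, traced just above (getopts ':u:c:f', fast_shutdown=1, nv_cache=0000:00:10.0, shift 3, device=0000:00:11.0), amounts to the following sketch. The -u branch is inferred from the option string and not exercised in this run; variable names follow the trace.

    # Sketch of the restore.sh option parsing visible in the trace: -f
    # selects the fast-shutdown variant, -c names the NV cache PCIe address,
    # and the remaining positional argument is the base device.
    while getopts ':u:c:f' opt; do
        case $opt in
            u) uuid=$OPTARG ;;        # assumed: restore an existing FTL UUID
            c) nv_cache=$OPTARG ;;    # 0000:00:10.0 in this run
            f) fast_shutdown=1 ;;     # adds --fast-shutdown to bdev_ftl_create
        esac
    done
    shift $((OPTIND - 1))             # "-f -c 0000:00:10.0" gives the traced "shift 3"
    device=$1                         # 0000:00:11.0 in this run
    timeout=240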
00:26:26.818 [2024-10-08 10:57:47.364784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93779 ] 00:26:27.079 [2024-10-08 10:57:47.492236] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:27.079 [2024-10-08 10:57:47.505224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.079 [2024-10-08 10:57:47.538060] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.647 10:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:27.647 10:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:26:27.647 10:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:27.647 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:26:27.647 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:27.647 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:26:27.647 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:26:27.647 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:27.907 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:27.907 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:26:27.907 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:27.907 10:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:26:27.907 10:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:27.907 10:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:27.907 10:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:27.907 10:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:28.165 10:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:28.165 { 00:26:28.165 "name": "nvme0n1", 00:26:28.165 "aliases": [ 00:26:28.165 "a38b372a-ac00-4eea-bd41-b010a29a3077" 00:26:28.165 ], 00:26:28.165 "product_name": "NVMe disk", 00:26:28.165 "block_size": 4096, 00:26:28.165 "num_blocks": 1310720, 00:26:28.166 "uuid": "a38b372a-ac00-4eea-bd41-b010a29a3077", 00:26:28.166 "numa_id": -1, 00:26:28.166 "assigned_rate_limits": { 00:26:28.166 "rw_ios_per_sec": 0, 00:26:28.166 "rw_mbytes_per_sec": 0, 00:26:28.166 "r_mbytes_per_sec": 0, 00:26:28.166 "w_mbytes_per_sec": 0 00:26:28.166 }, 00:26:28.166 "claimed": true, 00:26:28.166 "claim_type": "read_many_write_one", 00:26:28.166 "zoned": false, 00:26:28.166 "supported_io_types": { 00:26:28.166 "read": true, 00:26:28.166 "write": true, 00:26:28.166 "unmap": true, 00:26:28.166 "flush": true, 00:26:28.166 "reset": true, 00:26:28.166 "nvme_admin": true, 00:26:28.166 "nvme_io": true, 00:26:28.166 "nvme_io_md": false, 00:26:28.166 "write_zeroes": true, 00:26:28.166 "zcopy": false, 00:26:28.166 "get_zone_info": false, 00:26:28.166 "zone_management": false, 00:26:28.166 "zone_append": false, 00:26:28.166 "compare": true, 
00:26:28.166 "compare_and_write": false, 00:26:28.166 "abort": true, 00:26:28.166 "seek_hole": false, 00:26:28.166 "seek_data": false, 00:26:28.166 "copy": true, 00:26:28.166 "nvme_iov_md": false 00:26:28.166 }, 00:26:28.166 "driver_specific": { 00:26:28.166 "nvme": [ 00:26:28.166 { 00:26:28.166 "pci_address": "0000:00:11.0", 00:26:28.166 "trid": { 00:26:28.166 "trtype": "PCIe", 00:26:28.166 "traddr": "0000:00:11.0" 00:26:28.166 }, 00:26:28.166 "ctrlr_data": { 00:26:28.166 "cntlid": 0, 00:26:28.166 "vendor_id": "0x1b36", 00:26:28.166 "model_number": "QEMU NVMe Ctrl", 00:26:28.166 "serial_number": "12341", 00:26:28.166 "firmware_revision": "8.0.0", 00:26:28.166 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:28.166 "oacs": { 00:26:28.166 "security": 0, 00:26:28.166 "format": 1, 00:26:28.166 "firmware": 0, 00:26:28.166 "ns_manage": 1 00:26:28.166 }, 00:26:28.166 "multi_ctrlr": false, 00:26:28.166 "ana_reporting": false 00:26:28.166 }, 00:26:28.166 "vs": { 00:26:28.166 "nvme_version": "1.4" 00:26:28.166 }, 00:26:28.166 "ns_data": { 00:26:28.166 "id": 1, 00:26:28.166 "can_share": false 00:26:28.166 } 00:26:28.166 } 00:26:28.166 ], 00:26:28.166 "mp_policy": "active_passive" 00:26:28.166 } 00:26:28.166 } 00:26:28.166 ]' 00:26:28.166 10:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:28.166 10:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:28.166 10:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:28.166 10:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:28.166 10:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:28.166 10:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:26:28.166 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:26:28.166 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:28.166 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:26:28.166 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:28.166 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:28.427 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=19d0fe98-dc8a-4757-ae67-8d3638dd09fe 00:26:28.427 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:26:28.427 10:57:48 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 19d0fe98-dc8a-4757-ae67-8d3638dd09fe 00:26:28.685 10:57:49 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:28.943 10:57:49 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=ec1eeb8d-64a6-40b4-a626-5f70d49df9f2 00:26:28.943 10:57:49 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ec1eeb8d-64a6-40b4-a626-5f70d49df9f2 00:26:29.200 10:57:49 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=3752413d-0efe-4042-b2ba-d058709dbedd 00:26:29.200 10:57:49 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:26:29.200 10:57:49 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3752413d-0efe-4042-b2ba-d058709dbedd 00:26:29.200 10:57:49 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 
00:26:29.200 10:57:49 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:29.200 10:57:49 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=3752413d-0efe-4042-b2ba-d058709dbedd 00:26:29.200 10:57:49 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:26:29.200 10:57:49 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 3752413d-0efe-4042-b2ba-d058709dbedd 00:26:29.200 10:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=3752413d-0efe-4042-b2ba-d058709dbedd 00:26:29.200 10:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:29.200 10:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:29.200 10:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:29.200 10:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3752413d-0efe-4042-b2ba-d058709dbedd 00:26:29.200 10:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:29.200 { 00:26:29.200 "name": "3752413d-0efe-4042-b2ba-d058709dbedd", 00:26:29.200 "aliases": [ 00:26:29.200 "lvs/nvme0n1p0" 00:26:29.200 ], 00:26:29.200 "product_name": "Logical Volume", 00:26:29.200 "block_size": 4096, 00:26:29.200 "num_blocks": 26476544, 00:26:29.200 "uuid": "3752413d-0efe-4042-b2ba-d058709dbedd", 00:26:29.201 "assigned_rate_limits": { 00:26:29.201 "rw_ios_per_sec": 0, 00:26:29.201 "rw_mbytes_per_sec": 0, 00:26:29.201 "r_mbytes_per_sec": 0, 00:26:29.201 "w_mbytes_per_sec": 0 00:26:29.201 }, 00:26:29.201 "claimed": false, 00:26:29.201 "zoned": false, 00:26:29.201 "supported_io_types": { 00:26:29.201 "read": true, 00:26:29.201 "write": true, 00:26:29.201 "unmap": true, 00:26:29.201 "flush": false, 00:26:29.201 "reset": true, 00:26:29.201 "nvme_admin": false, 00:26:29.201 "nvme_io": false, 00:26:29.201 "nvme_io_md": false, 00:26:29.201 "write_zeroes": true, 00:26:29.201 "zcopy": false, 00:26:29.201 "get_zone_info": false, 00:26:29.201 "zone_management": false, 00:26:29.201 "zone_append": false, 00:26:29.201 "compare": false, 00:26:29.201 "compare_and_write": false, 00:26:29.201 "abort": false, 00:26:29.201 "seek_hole": true, 00:26:29.201 "seek_data": true, 00:26:29.201 "copy": false, 00:26:29.201 "nvme_iov_md": false 00:26:29.201 }, 00:26:29.201 "driver_specific": { 00:26:29.201 "lvol": { 00:26:29.201 "lvol_store_uuid": "ec1eeb8d-64a6-40b4-a626-5f70d49df9f2", 00:26:29.201 "base_bdev": "nvme0n1", 00:26:29.201 "thin_provision": true, 00:26:29.201 "num_allocated_clusters": 0, 00:26:29.201 "snapshot": false, 00:26:29.201 "clone": false, 00:26:29.201 "esnap_clone": false 00:26:29.201 } 00:26:29.201 } 00:26:29.201 } 00:26:29.201 ]' 00:26:29.201 10:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:29.458 10:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:29.458 10:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:29.458 10:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:29.458 10:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:29.458 10:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:29.458 10:57:49 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:26:29.458 10:57:49 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 
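create_nv_cache_bdev, entered just above and completed in the trace that follows, attaches the caching controller and carves a cache partition out of it. This outline is reconstructed from those trace lines, with the cache-sizing policy simplified to the 5171 MiB value observed in this run; the real helper derives it from the base bdev size.

    # Loose outline of create_nv_cache_bdev based on the surrounding trace:
    # attach the caching NVMe controller, size the cache against the base
    # bdev (cache_size=5171 here), split one partition of that size, and
    # hand the partition name back (nvc_bdev=nvc0n1p0 in the trace).
    create_nv_cache_bdev() {
        local name=$1 cache_bdf=$2 base_bdev=$3 cache_size
        "$rpc_py" bdev_nvme_attach_controller -b "$name" -t PCIe -a "$cache_bdf"
        cache_size=5171                        # sizing policy simplified for this sketch
        "$rpc_py" bdev_split_create "${name}n1" -s "$cache_size" 1
        echo "${name}n1p0"
    }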
00:26:29.458 10:57:49 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:29.458 10:57:50 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:29.458 10:57:50 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:29.458 10:57:50 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 3752413d-0efe-4042-b2ba-d058709dbedd 00:26:29.458 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=3752413d-0efe-4042-b2ba-d058709dbedd 00:26:29.458 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:29.458 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:29.458 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:29.458 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3752413d-0efe-4042-b2ba-d058709dbedd 00:26:29.716 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:29.716 { 00:26:29.716 "name": "3752413d-0efe-4042-b2ba-d058709dbedd", 00:26:29.716 "aliases": [ 00:26:29.716 "lvs/nvme0n1p0" 00:26:29.716 ], 00:26:29.716 "product_name": "Logical Volume", 00:26:29.716 "block_size": 4096, 00:26:29.716 "num_blocks": 26476544, 00:26:29.716 "uuid": "3752413d-0efe-4042-b2ba-d058709dbedd", 00:26:29.716 "assigned_rate_limits": { 00:26:29.716 "rw_ios_per_sec": 0, 00:26:29.716 "rw_mbytes_per_sec": 0, 00:26:29.716 "r_mbytes_per_sec": 0, 00:26:29.716 "w_mbytes_per_sec": 0 00:26:29.716 }, 00:26:29.716 "claimed": false, 00:26:29.716 "zoned": false, 00:26:29.716 "supported_io_types": { 00:26:29.716 "read": true, 00:26:29.716 "write": true, 00:26:29.716 "unmap": true, 00:26:29.716 "flush": false, 00:26:29.716 "reset": true, 00:26:29.716 "nvme_admin": false, 00:26:29.716 "nvme_io": false, 00:26:29.716 "nvme_io_md": false, 00:26:29.716 "write_zeroes": true, 00:26:29.716 "zcopy": false, 00:26:29.716 "get_zone_info": false, 00:26:29.716 "zone_management": false, 00:26:29.716 "zone_append": false, 00:26:29.716 "compare": false, 00:26:29.716 "compare_and_write": false, 00:26:29.716 "abort": false, 00:26:29.716 "seek_hole": true, 00:26:29.716 "seek_data": true, 00:26:29.716 "copy": false, 00:26:29.716 "nvme_iov_md": false 00:26:29.716 }, 00:26:29.716 "driver_specific": { 00:26:29.716 "lvol": { 00:26:29.716 "lvol_store_uuid": "ec1eeb8d-64a6-40b4-a626-5f70d49df9f2", 00:26:29.716 "base_bdev": "nvme0n1", 00:26:29.716 "thin_provision": true, 00:26:29.716 "num_allocated_clusters": 0, 00:26:29.716 "snapshot": false, 00:26:29.716 "clone": false, 00:26:29.716 "esnap_clone": false 00:26:29.716 } 00:26:29.716 } 00:26:29.716 } 00:26:29.716 ]' 00:26:29.716 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:29.716 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:29.716 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:29.973 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:29.973 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:29.973 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:29.973 10:57:50 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:26:29.973 10:57:50 ftl.ftl_restore_fast -- ftl/common.sh@50 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:29.973 10:57:50 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:26:29.973 10:57:50 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 3752413d-0efe-4042-b2ba-d058709dbedd 00:26:29.973 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=3752413d-0efe-4042-b2ba-d058709dbedd 00:26:29.973 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:29.973 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:26:29.973 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:26:29.973 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3752413d-0efe-4042-b2ba-d058709dbedd 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:30.232 { 00:26:30.232 "name": "3752413d-0efe-4042-b2ba-d058709dbedd", 00:26:30.232 "aliases": [ 00:26:30.232 "lvs/nvme0n1p0" 00:26:30.232 ], 00:26:30.232 "product_name": "Logical Volume", 00:26:30.232 "block_size": 4096, 00:26:30.232 "num_blocks": 26476544, 00:26:30.232 "uuid": "3752413d-0efe-4042-b2ba-d058709dbedd", 00:26:30.232 "assigned_rate_limits": { 00:26:30.232 "rw_ios_per_sec": 0, 00:26:30.232 "rw_mbytes_per_sec": 0, 00:26:30.232 "r_mbytes_per_sec": 0, 00:26:30.232 "w_mbytes_per_sec": 0 00:26:30.232 }, 00:26:30.232 "claimed": false, 00:26:30.232 "zoned": false, 00:26:30.232 "supported_io_types": { 00:26:30.232 "read": true, 00:26:30.232 "write": true, 00:26:30.232 "unmap": true, 00:26:30.232 "flush": false, 00:26:30.232 "reset": true, 00:26:30.232 "nvme_admin": false, 00:26:30.232 "nvme_io": false, 00:26:30.232 "nvme_io_md": false, 00:26:30.232 "write_zeroes": true, 00:26:30.232 "zcopy": false, 00:26:30.232 "get_zone_info": false, 00:26:30.232 "zone_management": false, 00:26:30.232 "zone_append": false, 00:26:30.232 "compare": false, 00:26:30.232 "compare_and_write": false, 00:26:30.232 "abort": false, 00:26:30.232 "seek_hole": true, 00:26:30.232 "seek_data": true, 00:26:30.232 "copy": false, 00:26:30.232 "nvme_iov_md": false 00:26:30.232 }, 00:26:30.232 "driver_specific": { 00:26:30.232 "lvol": { 00:26:30.232 "lvol_store_uuid": "ec1eeb8d-64a6-40b4-a626-5f70d49df9f2", 00:26:30.232 "base_bdev": "nvme0n1", 00:26:30.232 "thin_provision": true, 00:26:30.232 "num_allocated_clusters": 0, 00:26:30.232 "snapshot": false, 00:26:30.232 "clone": false, 00:26:30.232 "esnap_clone": false 00:26:30.232 } 00:26:30.232 } 00:26:30.232 } 00:26:30.232 ]' 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3752413d-0efe-4042-b2ba-d058709dbedd --l2p_dram_limit 10' 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- 
ftl/restore.sh@51 -- # '[' -n '' ']' 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:26:30.232 10:57:50 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3752413d-0efe-4042-b2ba-d058709dbedd --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:26:30.492 [2024-10-08 10:57:50.873294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.492 [2024-10-08 10:57:50.873335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:30.492 [2024-10-08 10:57:50.873348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:30.492 [2024-10-08 10:57:50.873358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.492 [2024-10-08 10:57:50.873403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.492 [2024-10-08 10:57:50.873411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:30.492 [2024-10-08 10:57:50.873421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:30.492 [2024-10-08 10:57:50.873430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.492 [2024-10-08 10:57:50.873448] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:30.492 [2024-10-08 10:57:50.873697] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:30.492 [2024-10-08 10:57:50.873710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.492 [2024-10-08 10:57:50.873718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:30.492 [2024-10-08 10:57:50.873725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:26:30.492 [2024-10-08 10:57:50.873730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.492 [2024-10-08 10:57:50.873881] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 849f357b-59b2-4ad1-93bf-cdd686aa264c 00:26:30.492 [2024-10-08 10:57:50.874811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.492 [2024-10-08 10:57:50.874835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:30.492 [2024-10-08 10:57:50.874845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:26:30.492 [2024-10-08 10:57:50.874855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.492 [2024-10-08 10:57:50.879463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.492 [2024-10-08 10:57:50.879492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:30.492 [2024-10-08 10:57:50.879499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.572 ms 00:26:30.492 [2024-10-08 10:57:50.879508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.492 [2024-10-08 10:57:50.879575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.492 [2024-10-08 10:57:50.879584] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:30.492 [2024-10-08 10:57:50.879593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:30.493 [2024-10-08 10:57:50.879600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.493 [2024-10-08 10:57:50.879637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.493 [2024-10-08 10:57:50.879645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:30.493 [2024-10-08 10:57:50.879651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:30.493 [2024-10-08 10:57:50.879658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.493 [2024-10-08 10:57:50.879675] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:30.493 [2024-10-08 10:57:50.880918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.493 [2024-10-08 10:57:50.880942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:30.493 [2024-10-08 10:57:50.880952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:26:30.493 [2024-10-08 10:57:50.880958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.493 [2024-10-08 10:57:50.880984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.493 [2024-10-08 10:57:50.880990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:30.493 [2024-10-08 10:57:50.880999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:30.493 [2024-10-08 10:57:50.881008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.493 [2024-10-08 10:57:50.881024] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:30.493 [2024-10-08 10:57:50.881128] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:30.493 [2024-10-08 10:57:50.881138] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:30.493 [2024-10-08 10:57:50.881147] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:30.493 [2024-10-08 10:57:50.881159] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:30.493 [2024-10-08 10:57:50.881166] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:30.493 [2024-10-08 10:57:50.881178] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:30.493 [2024-10-08 10:57:50.881185] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:30.493 [2024-10-08 10:57:50.881192] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:30.493 [2024-10-08 10:57:50.881199] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:30.493 [2024-10-08 10:57:50.881206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.493 [2024-10-08 10:57:50.881212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:30.493 [2024-10-08 10:57:50.881219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:26:30.493 [2024-10-08 10:57:50.881225] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.493 [2024-10-08 10:57:50.881291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.493 [2024-10-08 10:57:50.881298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:30.493 [2024-10-08 10:57:50.881304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:30.493 [2024-10-08 10:57:50.881312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.493 [2024-10-08 10:57:50.881386] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:30.493 [2024-10-08 10:57:50.881393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:30.493 [2024-10-08 10:57:50.881401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:30.493 [2024-10-08 10:57:50.881411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.493 [2024-10-08 10:57:50.881418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:30.493 [2024-10-08 10:57:50.881423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:30.493 [2024-10-08 10:57:50.881429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:30.493 [2024-10-08 10:57:50.881434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:30.493 [2024-10-08 10:57:50.881441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:30.493 [2024-10-08 10:57:50.881446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:30.493 [2024-10-08 10:57:50.881452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:30.493 [2024-10-08 10:57:50.881457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:30.493 [2024-10-08 10:57:50.881465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:30.493 [2024-10-08 10:57:50.881470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:30.493 [2024-10-08 10:57:50.881476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:30.493 [2024-10-08 10:57:50.881481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.493 [2024-10-08 10:57:50.881487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:30.493 [2024-10-08 10:57:50.881492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:30.493 [2024-10-08 10:57:50.881499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.493 [2024-10-08 10:57:50.881504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:30.493 [2024-10-08 10:57:50.881510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:30.493 [2024-10-08 10:57:50.881516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.493 [2024-10-08 10:57:50.881523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:30.493 [2024-10-08 10:57:50.881528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:30.493 [2024-10-08 10:57:50.881534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.493 [2024-10-08 10:57:50.881539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:30.493 [2024-10-08 10:57:50.881545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:30.493 [2024-10-08 10:57:50.881550] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.493 [2024-10-08 10:57:50.881559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:30.493 [2024-10-08 10:57:50.881564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:30.493 [2024-10-08 10:57:50.881571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:30.493 [2024-10-08 10:57:50.881577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:30.493 [2024-10-08 10:57:50.881584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:30.493 [2024-10-08 10:57:50.881590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:30.493 [2024-10-08 10:57:50.881597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:30.493 [2024-10-08 10:57:50.881603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:30.493 [2024-10-08 10:57:50.881610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:30.493 [2024-10-08 10:57:50.881616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:30.493 [2024-10-08 10:57:50.881623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:30.493 [2024-10-08 10:57:50.881629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.493 [2024-10-08 10:57:50.881636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:30.493 [2024-10-08 10:57:50.881642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:30.493 [2024-10-08 10:57:50.881649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.493 [2024-10-08 10:57:50.881654] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:30.493 [2024-10-08 10:57:50.881664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:30.493 [2024-10-08 10:57:50.881670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:30.493 [2024-10-08 10:57:50.881677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:30.493 [2024-10-08 10:57:50.881684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:30.493 [2024-10-08 10:57:50.881691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:30.493 [2024-10-08 10:57:50.881696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:30.493 [2024-10-08 10:57:50.881704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:30.493 [2024-10-08 10:57:50.881709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:30.493 [2024-10-08 10:57:50.881716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:30.493 [2024-10-08 10:57:50.881726] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:30.493 [2024-10-08 10:57:50.881736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:30.493 [2024-10-08 10:57:50.881746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:30.493 [2024-10-08 10:57:50.881754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 
blk_sz:0x80 00:26:30.493 [2024-10-08 10:57:50.881760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:30.493 [2024-10-08 10:57:50.881768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:30.493 [2024-10-08 10:57:50.881774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:30.493 [2024-10-08 10:57:50.881783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:30.493 [2024-10-08 10:57:50.881789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:30.493 [2024-10-08 10:57:50.881809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:30.493 [2024-10-08 10:57:50.881815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:30.493 [2024-10-08 10:57:50.881823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:30.493 [2024-10-08 10:57:50.881842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:30.493 [2024-10-08 10:57:50.881850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:30.493 [2024-10-08 10:57:50.881856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:30.493 [2024-10-08 10:57:50.881864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:30.493 [2024-10-08 10:57:50.881870] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:30.493 [2024-10-08 10:57:50.881880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:30.493 [2024-10-08 10:57:50.881887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:30.494 [2024-10-08 10:57:50.881896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:30.494 [2024-10-08 10:57:50.881902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:30.494 [2024-10-08 10:57:50.881909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:30.494 [2024-10-08 10:57:50.881916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:30.494 [2024-10-08 10:57:50.881926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:30.494 [2024-10-08 10:57:50.881933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:26:30.494 [2024-10-08 10:57:50.881941] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:30.494 [2024-10-08 10:57:50.881975] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:30.494 [2024-10-08 10:57:50.881983] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:33.033 [2024-10-08 10:57:53.545606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.033 [2024-10-08 10:57:53.545671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:33.033 [2024-10-08 10:57:53.545689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2663.619 ms 00:26:33.033 [2024-10-08 10:57:53.545705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.033 [2024-10-08 10:57:53.553899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.033 [2024-10-08 10:57:53.553939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:33.033 [2024-10-08 10:57:53.553951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.130 ms 00:26:33.033 [2024-10-08 10:57:53.553962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.033 [2024-10-08 10:57:53.554048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.033 [2024-10-08 10:57:53.554063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:33.033 [2024-10-08 10:57:53.554074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:33.033 [2024-10-08 10:57:53.554083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.033 [2024-10-08 10:57:53.561783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.033 [2024-10-08 10:57:53.561830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:33.033 [2024-10-08 10:57:53.561856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.663 ms 00:26:33.033 [2024-10-08 10:57:53.561872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.033 [2024-10-08 10:57:53.561904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.033 [2024-10-08 10:57:53.561917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:33.033 [2024-10-08 10:57:53.561925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:33.033 [2024-10-08 10:57:53.561934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.033 [2024-10-08 10:57:53.562256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.033 [2024-10-08 10:57:53.562272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:33.033 [2024-10-08 10:57:53.562280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:26:33.033 [2024-10-08 10:57:53.562291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.033 [2024-10-08 10:57:53.562388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.033 [2024-10-08 10:57:53.562398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:33.033 [2024-10-08 10:57:53.562408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:26:33.033 [2024-10-08 10:57:53.562418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.033 [2024-10-08 
10:57:53.582152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.033 [2024-10-08 10:57:53.582408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:33.033 [2024-10-08 10:57:53.582445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.711 ms 00:26:33.033 [2024-10-08 10:57:53.582466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.033 [2024-10-08 10:57:53.592317] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:33.033 [2024-10-08 10:57:53.594976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.033 [2024-10-08 10:57:53.595004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:33.033 [2024-10-08 10:57:53.595017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.353 ms 00:26:33.033 [2024-10-08 10:57:53.595026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.295 [2024-10-08 10:57:53.652195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.295 [2024-10-08 10:57:53.652240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:33.295 [2024-10-08 10:57:53.652256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.140 ms 00:26:33.295 [2024-10-08 10:57:53.652267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.295 [2024-10-08 10:57:53.652444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.295 [2024-10-08 10:57:53.652454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:33.295 [2024-10-08 10:57:53.652465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:26:33.295 [2024-10-08 10:57:53.652472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.295 [2024-10-08 10:57:53.655995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.295 [2024-10-08 10:57:53.656031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:33.295 [2024-10-08 10:57:53.656042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.501 ms 00:26:33.295 [2024-10-08 10:57:53.656050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.295 [2024-10-08 10:57:53.659385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.295 [2024-10-08 10:57:53.659414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:33.295 [2024-10-08 10:57:53.659425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.297 ms 00:26:33.295 [2024-10-08 10:57:53.659432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.295 [2024-10-08 10:57:53.659722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.295 [2024-10-08 10:57:53.659731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:33.295 [2024-10-08 10:57:53.659743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:26:33.296 [2024-10-08 10:57:53.659750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.296 [2024-10-08 10:57:53.689117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.296 [2024-10-08 10:57:53.689253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:33.296 [2024-10-08 10:57:53.689273] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.335 ms 00:26:33.296 [2024-10-08 10:57:53.689281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.296 [2024-10-08 10:57:53.693630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.296 [2024-10-08 10:57:53.693662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:33.296 [2024-10-08 10:57:53.693674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.290 ms 00:26:33.296 [2024-10-08 10:57:53.693682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.296 [2024-10-08 10:57:53.697466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.296 [2024-10-08 10:57:53.697496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:33.296 [2024-10-08 10:57:53.697506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.749 ms 00:26:33.296 [2024-10-08 10:57:53.697513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.296 [2024-10-08 10:57:53.701969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.296 [2024-10-08 10:57:53.701995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:33.296 [2024-10-08 10:57:53.702008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.421 ms 00:26:33.296 [2024-10-08 10:57:53.702015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.296 [2024-10-08 10:57:53.702051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.296 [2024-10-08 10:57:53.702060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:33.296 [2024-10-08 10:57:53.702070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:33.296 [2024-10-08 10:57:53.702077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.296 [2024-10-08 10:57:53.702150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.296 [2024-10-08 10:57:53.702159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:33.296 [2024-10-08 10:57:53.702168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:33.296 [2024-10-08 10:57:53.702176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.296 [2024-10-08 10:57:53.703019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2829.295 ms, result 0 00:26:33.296 { 00:26:33.296 "name": "ftl0", 00:26:33.296 "uuid": "849f357b-59b2-4ad1-93bf-cdd686aa264c" 00:26:33.296 } 00:26:33.296 10:57:53 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:26:33.296 10:57:53 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:33.557 10:57:54 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:26:33.557 10:57:54 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:33.843 [2024-10-08 10:57:54.194786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.843 [2024-10-08 10:57:54.194843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:33.843 [2024-10-08 10:57:54.194856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:33.843 [2024-10-08 
10:57:54.194867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.843 [2024-10-08 10:57:54.194892] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:33.843 [2024-10-08 10:57:54.195340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.843 [2024-10-08 10:57:54.195357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:33.843 [2024-10-08 10:57:54.195367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:26:33.843 [2024-10-08 10:57:54.195375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.843 [2024-10-08 10:57:54.195626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.843 [2024-10-08 10:57:54.195636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:33.843 [2024-10-08 10:57:54.195647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:26:33.843 [2024-10-08 10:57:54.195659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.843 [2024-10-08 10:57:54.198929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.843 [2024-10-08 10:57:54.199052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:33.843 [2024-10-08 10:57:54.199071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.252 ms 00:26:33.843 [2024-10-08 10:57:54.199078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.843 [2024-10-08 10:57:54.205228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.843 [2024-10-08 10:57:54.205252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:33.843 [2024-10-08 10:57:54.205265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.125 ms 00:26:33.843 [2024-10-08 10:57:54.205271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.843 [2024-10-08 10:57:54.207470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.843 [2024-10-08 10:57:54.207501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:33.843 [2024-10-08 10:57:54.207512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.125 ms 00:26:33.843 [2024-10-08 10:57:54.207519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.843 [2024-10-08 10:57:54.212642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.843 [2024-10-08 10:57:54.212673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:33.843 [2024-10-08 10:57:54.212684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.084 ms 00:26:33.843 [2024-10-08 10:57:54.212692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.843 [2024-10-08 10:57:54.212820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.843 [2024-10-08 10:57:54.212834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:33.843 [2024-10-08 10:57:54.212844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:26:33.843 [2024-10-08 10:57:54.212851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.843 [2024-10-08 10:57:54.215128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.843 [2024-10-08 10:57:54.215156] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:33.843 [2024-10-08 10:57:54.215166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.251 ms 00:26:33.843 [2024-10-08 10:57:54.215173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.843 [2024-10-08 10:57:54.217194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.843 [2024-10-08 10:57:54.217221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:33.843 [2024-10-08 10:57:54.217231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.986 ms 00:26:33.843 [2024-10-08 10:57:54.217238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.843 [2024-10-08 10:57:54.219007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.843 [2024-10-08 10:57:54.219035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:33.843 [2024-10-08 10:57:54.219046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.736 ms 00:26:33.843 [2024-10-08 10:57:54.219053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.843 [2024-10-08 10:57:54.220754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.843 [2024-10-08 10:57:54.220784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:33.843 [2024-10-08 10:57:54.220809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.643 ms 00:26:33.843 [2024-10-08 10:57:54.220816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.843 [2024-10-08 10:57:54.220847] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:33.843 [2024-10-08 10:57:54.220860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220961] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.220993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221169] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:33.843 [2024-10-08 10:57:54.221247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 
10:57:54.221366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:26:33.844 [2024-10-08 10:57:54.221575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:33.844 [2024-10-08 10:57:54.221691] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:33.844 [2024-10-08 10:57:54.221700] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 849f357b-59b2-4ad1-93bf-cdd686aa264c 00:26:33.844 [2024-10-08 10:57:54.221708] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:33.844 [2024-10-08 10:57:54.221716] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:33.844 [2024-10-08 10:57:54.221724] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:33.844 [2024-10-08 10:57:54.221732] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:33.844 [2024-10-08 10:57:54.221739] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:33.844 [2024-10-08 10:57:54.221747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:33.844 [2024-10-08 10:57:54.221754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:33.844 [2024-10-08 10:57:54.221762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:33.844 [2024-10-08 10:57:54.221768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:33.844 [2024-10-08 10:57:54.221777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.844 [2024-10-08 10:57:54.221786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:33.844 [2024-10-08 10:57:54.221805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.932 ms 00:26:33.844 [2024-10-08 10:57:54.221813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:26:33.844 [2024-10-08 10:57:54.223231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.844 [2024-10-08 10:57:54.223251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:33.844 [2024-10-08 10:57:54.223261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:26:33.844 [2024-10-08 10:57:54.223273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.844 [2024-10-08 10:57:54.223363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.844 [2024-10-08 10:57:54.223372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:33.844 [2024-10-08 10:57:54.223381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:26:33.844 [2024-10-08 10:57:54.223391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.844 [2024-10-08 10:57:54.228422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.844 [2024-10-08 10:57:54.228450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:33.844 [2024-10-08 10:57:54.228461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.844 [2024-10-08 10:57:54.228468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.844 [2024-10-08 10:57:54.228522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.844 [2024-10-08 10:57:54.228530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:33.844 [2024-10-08 10:57:54.228539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.844 [2024-10-08 10:57:54.228546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.844 [2024-10-08 10:57:54.228595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.844 [2024-10-08 10:57:54.228604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:33.844 [2024-10-08 10:57:54.228613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.844 [2024-10-08 10:57:54.228620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.844 [2024-10-08 10:57:54.228638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.844 [2024-10-08 10:57:54.228647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:33.844 [2024-10-08 10:57:54.228655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.844 [2024-10-08 10:57:54.228662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.844 [2024-10-08 10:57:54.236988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.844 [2024-10-08 10:57:54.237123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:33.844 [2024-10-08 10:57:54.237141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.844 [2024-10-08 10:57:54.237149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.844 [2024-10-08 10:57:54.244442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.844 [2024-10-08 10:57:54.244475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:33.844 [2024-10-08 10:57:54.244486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.844 
[2024-10-08 10:57:54.244496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.844 [2024-10-08 10:57:54.244560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.844 [2024-10-08 10:57:54.244570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:33.844 [2024-10-08 10:57:54.244580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.844 [2024-10-08 10:57:54.244587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.844 [2024-10-08 10:57:54.244637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.844 [2024-10-08 10:57:54.244650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:33.845 [2024-10-08 10:57:54.244662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.845 [2024-10-08 10:57:54.244669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.845 [2024-10-08 10:57:54.244732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.845 [2024-10-08 10:57:54.244741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:33.845 [2024-10-08 10:57:54.244750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.845 [2024-10-08 10:57:54.244757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.845 [2024-10-08 10:57:54.244787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.845 [2024-10-08 10:57:54.244812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:33.845 [2024-10-08 10:57:54.244822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.845 [2024-10-08 10:57:54.244832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.845 [2024-10-08 10:57:54.244868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.845 [2024-10-08 10:57:54.244877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:33.845 [2024-10-08 10:57:54.244890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.845 [2024-10-08 10:57:54.244897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.845 [2024-10-08 10:57:54.244939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.845 [2024-10-08 10:57:54.244948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:33.845 [2024-10-08 10:57:54.244960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.845 [2024-10-08 10:57:54.244967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.845 [2024-10-08 10:57:54.245091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.274 ms, result 0 00:26:33.845 true 00:26:33.845 10:57:54 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 93779 00:26:33.845 10:57:54 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93779 ']' 00:26:33.845 10:57:54 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93779 00:26:33.845 10:57:54 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:26:33.845 10:57:54 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:33.845 10:57:54 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # 
ps --no-headers -o comm= 93779 00:26:33.845 killing process with pid 93779 00:26:33.845 10:57:54 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:33.845 10:57:54 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:33.845 10:57:54 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93779' 00:26:33.845 10:57:54 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 93779 00:26:33.845 10:57:54 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 93779 00:26:39.133 10:57:58 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:26:43.359 262144+0 records in 00:26:43.359 262144+0 records out 00:26:43.359 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.25851 s, 252 MB/s 00:26:43.359 10:58:03 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:44.742 10:58:05 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:44.742 [2024-10-08 10:58:05.218850] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:26:44.742 [2024-10-08 10:58:05.218945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93982 ] 00:26:45.001 [2024-10-08 10:58:05.340679] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
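The dd/md5sum/spdk_dd sequence above is the data-preparation half of the restore test: dd writes 262144 blocks of 4 KiB (262144 x 4096 = 1073741824 bytes, so the reported 4.25851 s works out to the logged ~252 MB/s), md5sum records a reference checksum, and spdk_dd then streams the file into the ftl0 bdev using the subsystem config saved earlier via save_subsystem_config. A minimal standalone sketch of that sequence, assuming the repo path exactly as it appears in the log and taking the file and bdev names as given:

  SPDK_REPO=/home/vagrant/spdk_repo/spdk          # assumption: checkout path from the log
  TESTFILE="$SPDK_REPO/test/ftl/testfile"

  # 1 GiB of random payload: 256K blocks x 4 KiB = 1073741824 bytes
  dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K

  # reference checksum, compared again after the FTL device is restored
  md5sum "$TESTFILE"

  # copy the payload into the ftl0 bdev using the saved bdev JSON config
  "$SPDK_REPO/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 \
      --json="$SPDK_REPO/test/ftl/config/ftl.json"

The second FTL startup that follows is the restore path this test exercises: unlike the first bring-up above ("FTL layout setup mode 1", "Create new FTL, UUID ..."), this one runs "Load super block" and "FTL layout setup mode 0", i.e. the device is reconstructed from the existing superblock after the --fast-shutdown unload rather than created fresh.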
00:26:45.001 [2024-10-08 10:58:05.363097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.001 [2024-10-08 10:58:05.393059] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.001 [2024-10-08 10:58:05.475374] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:45.001 [2024-10-08 10:58:05.475428] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:45.263 [2024-10-08 10:58:05.618173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.263 [2024-10-08 10:58:05.618471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:45.263 [2024-10-08 10:58:05.618521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:45.263 [2024-10-08 10:58:05.618537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.263 [2024-10-08 10:58:05.618622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.263 [2024-10-08 10:58:05.618634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:45.263 [2024-10-08 10:58:05.618642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:45.263 [2024-10-08 10:58:05.618653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.263 [2024-10-08 10:58:05.618678] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:45.264 [2024-10-08 10:58:05.618936] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:45.264 [2024-10-08 10:58:05.618951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.264 [2024-10-08 10:58:05.618964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:45.264 [2024-10-08 10:58:05.618973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:26:45.264 [2024-10-08 10:58:05.618984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.264 [2024-10-08 10:58:05.620027] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:45.264 [2024-10-08 10:58:05.622564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.264 [2024-10-08 10:58:05.622598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:45.264 [2024-10-08 10:58:05.622613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.539 ms 00:26:45.264 [2024-10-08 10:58:05.622621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.264 [2024-10-08 10:58:05.622675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.264 [2024-10-08 10:58:05.622684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:45.264 [2024-10-08 10:58:05.622692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:45.264 [2024-10-08 10:58:05.622701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.264 [2024-10-08 10:58:05.627345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.264 [2024-10-08 10:58:05.627379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:45.264 [2024-10-08 10:58:05.627389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.595 ms 00:26:45.264 [2024-10-08 10:58:05.627398] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.264 [2024-10-08 10:58:05.627466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.264 [2024-10-08 10:58:05.627478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:45.264 [2024-10-08 10:58:05.627486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:45.264 [2024-10-08 10:58:05.627493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.264 [2024-10-08 10:58:05.627544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.264 [2024-10-08 10:58:05.627558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:45.264 [2024-10-08 10:58:05.627566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:45.264 [2024-10-08 10:58:05.627573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.264 [2024-10-08 10:58:05.627596] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:45.264 [2024-10-08 10:58:05.628886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.264 [2024-10-08 10:58:05.628939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:45.264 [2024-10-08 10:58:05.628950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.296 ms 00:26:45.264 [2024-10-08 10:58:05.628958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.264 [2024-10-08 10:58:05.628991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.264 [2024-10-08 10:58:05.628999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:45.264 [2024-10-08 10:58:05.629011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:45.264 [2024-10-08 10:58:05.629017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.264 [2024-10-08 10:58:05.629040] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:45.264 [2024-10-08 10:58:05.629059] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:45.264 [2024-10-08 10:58:05.629096] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:45.264 [2024-10-08 10:58:05.629117] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:45.264 [2024-10-08 10:58:05.629220] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:45.264 [2024-10-08 10:58:05.629230] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:45.264 [2024-10-08 10:58:05.629240] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:45.264 [2024-10-08 10:58:05.629252] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:45.264 [2024-10-08 10:58:05.629261] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:45.264 [2024-10-08 10:58:05.629270] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:45.264 [2024-10-08 10:58:05.629277] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:26:45.264 [2024-10-08 10:58:05.629284] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:45.264 [2024-10-08 10:58:05.629291] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:45.264 [2024-10-08 10:58:05.629298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.264 [2024-10-08 10:58:05.629306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:45.264 [2024-10-08 10:58:05.629313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:26:45.264 [2024-10-08 10:58:05.629320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.264 [2024-10-08 10:58:05.629404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.264 [2024-10-08 10:58:05.629414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:45.264 [2024-10-08 10:58:05.629422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:45.264 [2024-10-08 10:58:05.629429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.264 [2024-10-08 10:58:05.629524] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:45.264 [2024-10-08 10:58:05.629534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:45.264 [2024-10-08 10:58:05.629542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:45.264 [2024-10-08 10:58:05.629550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.264 [2024-10-08 10:58:05.629557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:45.264 [2024-10-08 10:58:05.629563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:45.264 [2024-10-08 10:58:05.629570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:45.264 [2024-10-08 10:58:05.629577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:45.264 [2024-10-08 10:58:05.629591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:45.264 [2024-10-08 10:58:05.629597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:45.264 [2024-10-08 10:58:05.629606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:45.264 [2024-10-08 10:58:05.629613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:45.264 [2024-10-08 10:58:05.629622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:45.264 [2024-10-08 10:58:05.629629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:45.264 [2024-10-08 10:58:05.629635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:45.264 [2024-10-08 10:58:05.629642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.264 [2024-10-08 10:58:05.629649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:45.264 [2024-10-08 10:58:05.629655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:45.264 [2024-10-08 10:58:05.629662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.264 [2024-10-08 10:58:05.629669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:45.264 [2024-10-08 10:58:05.629675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:45.264 [2024-10-08 10:58:05.629683] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.264 [2024-10-08 10:58:05.629689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:45.264 [2024-10-08 10:58:05.629696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:45.264 [2024-10-08 10:58:05.629702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.264 [2024-10-08 10:58:05.629709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:45.264 [2024-10-08 10:58:05.629715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:45.264 [2024-10-08 10:58:05.629721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.264 [2024-10-08 10:58:05.629731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:45.264 [2024-10-08 10:58:05.629738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:45.264 [2024-10-08 10:58:05.629745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:45.264 [2024-10-08 10:58:05.629751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:45.264 [2024-10-08 10:58:05.629757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:45.264 [2024-10-08 10:58:05.629764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:45.264 [2024-10-08 10:58:05.629770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:45.264 [2024-10-08 10:58:05.629777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:45.265 [2024-10-08 10:58:05.629783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:45.265 [2024-10-08 10:58:05.629791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:45.265 [2024-10-08 10:58:05.629809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:45.265 [2024-10-08 10:58:05.629816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.265 [2024-10-08 10:58:05.629822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:45.265 [2024-10-08 10:58:05.629829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:45.265 [2024-10-08 10:58:05.629836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.265 [2024-10-08 10:58:05.629843] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:45.265 [2024-10-08 10:58:05.629872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:45.265 [2024-10-08 10:58:05.629883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:45.265 [2024-10-08 10:58:05.629890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:45.265 [2024-10-08 10:58:05.629898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:45.265 [2024-10-08 10:58:05.629906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:45.265 [2024-10-08 10:58:05.629912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:45.265 [2024-10-08 10:58:05.629920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:45.265 [2024-10-08 10:58:05.629927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:45.265 [2024-10-08 10:58:05.629933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:26:45.265 [2024-10-08 10:58:05.629944] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:45.265 [2024-10-08 10:58:05.629957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:45.265 [2024-10-08 10:58:05.629969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:45.265 [2024-10-08 10:58:05.629977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:45.265 [2024-10-08 10:58:05.629984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:45.265 [2024-10-08 10:58:05.629990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:45.265 [2024-10-08 10:58:05.629997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:45.265 [2024-10-08 10:58:05.630006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:45.265 [2024-10-08 10:58:05.630013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:45.265 [2024-10-08 10:58:05.630021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:45.265 [2024-10-08 10:58:05.630027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:45.265 [2024-10-08 10:58:05.630034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:45.265 [2024-10-08 10:58:05.630041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:45.265 [2024-10-08 10:58:05.630048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:45.265 [2024-10-08 10:58:05.630054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:45.265 [2024-10-08 10:58:05.630062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:45.265 [2024-10-08 10:58:05.630069] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:45.265 [2024-10-08 10:58:05.630080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:45.265 [2024-10-08 10:58:05.630088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:45.265 [2024-10-08 10:58:05.630096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:45.265 [2024-10-08 10:58:05.630102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:45.265 [2024-10-08 10:58:05.630110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:45.265 [2024-10-08 10:58:05.630117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 10:58:05.630127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:45.265 [2024-10-08 10:58:05.630137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:26:45.265 [2024-10-08 10:58:05.630145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.265 [2024-10-08 10:58:05.650011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 10:58:05.650160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:45.265 [2024-10-08 10:58:05.650236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.821 ms 00:26:45.265 [2024-10-08 10:58:05.650263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.265 [2024-10-08 10:58:05.650376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 10:58:05.650402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:45.265 [2024-10-08 10:58:05.650424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:45.265 [2024-10-08 10:58:05.650481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.265 [2024-10-08 10:58:05.658647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 10:58:05.658765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:45.265 [2024-10-08 10:58:05.658842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.071 ms 00:26:45.265 [2024-10-08 10:58:05.658869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.265 [2024-10-08 10:58:05.658918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 10:58:05.658943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:45.265 [2024-10-08 10:58:05.658965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:45.265 [2024-10-08 10:58:05.658986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.265 [2024-10-08 10:58:05.659324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 10:58:05.659422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:45.265 [2024-10-08 10:58:05.659479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:26:45.265 [2024-10-08 10:58:05.659505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.265 [2024-10-08 10:58:05.659748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 10:58:05.659835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:45.265 [2024-10-08 10:58:05.659944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:26:45.265 [2024-10-08 10:58:05.659976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.265 [2024-10-08 10:58:05.664610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 
10:58:05.664714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:45.265 [2024-10-08 10:58:05.664766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.564 ms 00:26:45.265 [2024-10-08 10:58:05.664820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.265 [2024-10-08 10:58:05.667477] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:45.265 [2024-10-08 10:58:05.667588] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:45.265 [2024-10-08 10:58:05.667648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 10:58:05.667670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:45.265 [2024-10-08 10:58:05.667689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.698 ms 00:26:45.265 [2024-10-08 10:58:05.667714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.265 [2024-10-08 10:58:05.682199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 10:58:05.682301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:45.265 [2024-10-08 10:58:05.682354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.424 ms 00:26:45.265 [2024-10-08 10:58:05.682381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.265 [2024-10-08 10:58:05.684566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 10:58:05.684685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:45.265 [2024-10-08 10:58:05.684738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.858 ms 00:26:45.265 [2024-10-08 10:58:05.684761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.265 [2024-10-08 10:58:05.686891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 10:58:05.687006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:45.265 [2024-10-08 10:58:05.687058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.807 ms 00:26:45.265 [2024-10-08 10:58:05.687081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.265 [2024-10-08 10:58:05.687680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 10:58:05.687754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:45.265 [2024-10-08 10:58:05.687852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:26:45.265 [2024-10-08 10:58:05.687879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.265 [2024-10-08 10:58:05.703046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 10:58:05.703211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:45.265 [2024-10-08 10:58:05.703276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.135 ms 00:26:45.265 [2024-10-08 10:58:05.703301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.265 [2024-10-08 10:58:05.710979] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:45.265 [2024-10-08 10:58:05.713722] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.265 [2024-10-08 10:58:05.713831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:45.265 [2024-10-08 10:58:05.713903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.329 ms 00:26:45.266 [2024-10-08 10:58:05.713937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.266 [2024-10-08 10:58:05.714032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.266 [2024-10-08 10:58:05.714201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:45.266 [2024-10-08 10:58:05.714252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:45.266 [2024-10-08 10:58:05.714274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.266 [2024-10-08 10:58:05.714378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.266 [2024-10-08 10:58:05.714409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:45.266 [2024-10-08 10:58:05.714475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:45.266 [2024-10-08 10:58:05.714497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.266 [2024-10-08 10:58:05.714595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.266 [2024-10-08 10:58:05.714646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:45.266 [2024-10-08 10:58:05.714669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:45.266 [2024-10-08 10:58:05.714688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.266 [2024-10-08 10:58:05.714762] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:45.266 [2024-10-08 10:58:05.714793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.266 [2024-10-08 10:58:05.714853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:45.266 [2024-10-08 10:58:05.714927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:45.266 [2024-10-08 10:58:05.714978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.266 [2024-10-08 10:58:05.718065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.266 [2024-10-08 10:58:05.718171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:45.266 [2024-10-08 10:58:05.718219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.049 ms 00:26:45.266 [2024-10-08 10:58:05.718243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.266 [2024-10-08 10:58:05.718324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:45.266 [2024-10-08 10:58:05.718353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:45.266 [2024-10-08 10:58:05.718377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:45.266 [2024-10-08 10:58:05.718435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:45.266 [2024-10-08 10:58:05.719508] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 100.885 ms, result 0 00:26:46.205  [2024-10-08T10:58:08.154Z] Copying: 27/1024 [MB] (27 MBps) [2024-10-08T10:58:09.085Z] Copying: 73/1024 [MB] (46 MBps) 
[2024-10-08T10:58:10.019Z] Copying: 123/1024 [MB] (50 MBps) [2024-10-08T10:58:10.957Z] Copying: 177/1024 [MB] (53 MBps) [2024-10-08T10:58:11.898Z] Copying: 202/1024 [MB] (25 MBps) [2024-10-08T10:58:12.843Z] Copying: 225/1024 [MB] (23 MBps) [2024-10-08T10:58:13.788Z] Copying: 246/1024 [MB] (21 MBps) [2024-10-08T10:58:14.734Z] Copying: 260/1024 [MB] (13 MBps) [2024-10-08T10:58:16.151Z] Copying: 275/1024 [MB] (15 MBps) [2024-10-08T10:58:17.091Z] Copying: 292/1024 [MB] (17 MBps) [2024-10-08T10:58:18.032Z] Copying: 317/1024 [MB] (24 MBps) [2024-10-08T10:58:18.971Z] Copying: 338/1024 [MB] (21 MBps) [2024-10-08T10:58:19.914Z] Copying: 358/1024 [MB] (20 MBps) [2024-10-08T10:58:20.854Z] Copying: 378/1024 [MB] (19 MBps) [2024-10-08T10:58:21.798Z] Copying: 397/1024 [MB] (18 MBps) [2024-10-08T10:58:22.742Z] Copying: 421/1024 [MB] (23 MBps) [2024-10-08T10:58:24.127Z] Copying: 439/1024 [MB] (18 MBps) [2024-10-08T10:58:25.068Z] Copying: 458/1024 [MB] (18 MBps) [2024-10-08T10:58:26.013Z] Copying: 482/1024 [MB] (24 MBps) [2024-10-08T10:58:26.956Z] Copying: 508/1024 [MB] (26 MBps) [2024-10-08T10:58:27.898Z] Copying: 525/1024 [MB] (17 MBps) [2024-10-08T10:58:28.853Z] Copying: 545/1024 [MB] (19 MBps) [2024-10-08T10:58:29.795Z] Copying: 557/1024 [MB] (12 MBps) [2024-10-08T10:58:30.740Z] Copying: 572/1024 [MB] (15 MBps) [2024-10-08T10:58:32.125Z] Copying: 583/1024 [MB] (11 MBps) [2024-10-08T10:58:33.069Z] Copying: 601/1024 [MB] (17 MBps) [2024-10-08T10:58:34.013Z] Copying: 613/1024 [MB] (11 MBps) [2024-10-08T10:58:34.956Z] Copying: 627/1024 [MB] (14 MBps) [2024-10-08T10:58:35.896Z] Copying: 644/1024 [MB] (17 MBps) [2024-10-08T10:58:36.828Z] Copying: 661/1024 [MB] (16 MBps) [2024-10-08T10:58:37.761Z] Copying: 701/1024 [MB] (40 MBps) [2024-10-08T10:58:39.133Z] Copying: 747/1024 [MB] (45 MBps) [2024-10-08T10:58:40.065Z] Copying: 793/1024 [MB] (46 MBps) [2024-10-08T10:58:41.003Z] Copying: 839/1024 [MB] (45 MBps) [2024-10-08T10:58:41.944Z] Copying: 867/1024 [MB] (27 MBps) [2024-10-08T10:58:42.879Z] Copying: 886/1024 [MB] (18 MBps) [2024-10-08T10:58:43.820Z] Copying: 927/1024 [MB] (41 MBps) [2024-10-08T10:58:44.759Z] Copying: 953/1024 [MB] (26 MBps) [2024-10-08T10:58:46.140Z] Copying: 977/1024 [MB] (23 MBps) [2024-10-08T10:58:46.715Z] Copying: 1004/1024 [MB] (27 MBps) [2024-10-08T10:58:46.715Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-10-08 10:58:46.605386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.138 [2024-10-08 10:58:46.605499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:26.138 [2024-10-08 10:58:46.605556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:26.138 [2024-10-08 10:58:46.605581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.138 [2024-10-08 10:58:46.605616] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:26.138 [2024-10-08 10:58:46.606115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.138 [2024-10-08 10:58:46.606158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:26.138 [2024-10-08 10:58:46.606179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:27:26.138 [2024-10-08 10:58:46.606197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.138 [2024-10-08 10:58:46.609181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.138 [2024-10-08 10:58:46.609279] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:26.138 [2024-10-08 10:58:46.609329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.638 ms 00:27:26.138 [2024-10-08 10:58:46.609351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.138 [2024-10-08 10:58:46.609391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.138 [2024-10-08 10:58:46.609417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:26.138 [2024-10-08 10:58:46.609437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:26.138 [2024-10-08 10:58:46.609456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.138 [2024-10-08 10:58:46.609508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.138 [2024-10-08 10:58:46.609534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:26.138 [2024-10-08 10:58:46.609554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:26.138 [2024-10-08 10:58:46.609615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.138 [2024-10-08 10:58:46.609644] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:26.138 [2024-10-08 10:58:46.609668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:26.138 [2024-10-08 10:58:46.609708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:26.138 [2024-10-08 10:58:46.609737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:26.138 [2024-10-08 10:58:46.609766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:26.138 [2024-10-08 10:58:46.609804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:26.138 [2024-10-08 10:58:46.609865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:26.138 [2024-10-08 10:58:46.609896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:26.138 [2024-10-08 10:58:46.609935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:26.138 [2024-10-08 10:58:46.609964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:26.138 [2024-10-08 10:58:46.609993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.610055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.610085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.610113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.610142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.610197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.610742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.610830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.610886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.610919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611484] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611668] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 
10:58:46.611859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:26.139 [2024-10-08 10:58:46.611938] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:26.139 [2024-10-08 10:58:46.611949] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 849f357b-59b2-4ad1-93bf-cdd686aa264c 00:27:26.139 [2024-10-08 10:58:46.611956] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:26.139 [2024-10-08 10:58:46.611963] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:27:26.139 [2024-10-08 10:58:46.611970] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:26.139 [2024-10-08 10:58:46.611977] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:26.139 [2024-10-08 10:58:46.611983] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:26.140 [2024-10-08 10:58:46.611991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:26.140 [2024-10-08 10:58:46.611998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:26.140 [2024-10-08 10:58:46.612005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:26.140 [2024-10-08 10:58:46.612011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:26.140 [2024-10-08 10:58:46.612019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.140 [2024-10-08 10:58:46.612026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:26.140 [2024-10-08 10:58:46.612034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.375 ms 00:27:26.140 [2024-10-08 10:58:46.612044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.140 [2024-10-08 10:58:46.613405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.140 [2024-10-08 10:58:46.613427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:26.140 [2024-10-08 10:58:46.613437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.342 ms 00:27:26.140 [2024-10-08 10:58:46.613444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:26.140 [2024-10-08 10:58:46.613521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.140 [2024-10-08 10:58:46.613528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:26.140 [2024-10-08 10:58:46.613536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:26.140 [2024-10-08 10:58:46.613545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.140 [2024-10-08 10:58:46.617845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.140 [2024-10-08 10:58:46.617954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:26.140 [2024-10-08 10:58:46.618003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.140 [2024-10-08 10:58:46.618026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.140 [2024-10-08 10:58:46.618098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.140 [2024-10-08 10:58:46.618119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:26.140 [2024-10-08 10:58:46.618138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.140 [2024-10-08 10:58:46.618159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.140 [2024-10-08 10:58:46.618235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.140 [2024-10-08 10:58:46.618260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:26.140 [2024-10-08 10:58:46.618285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.140 [2024-10-08 10:58:46.618337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.140 [2024-10-08 10:58:46.618367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.140 [2024-10-08 10:58:46.618437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:26.140 [2024-10-08 10:58:46.618481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.140 [2024-10-08 10:58:46.618491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.140 [2024-10-08 10:58:46.626984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.140 [2024-10-08 10:58:46.627021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:26.140 [2024-10-08 10:58:46.627032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.140 [2024-10-08 10:58:46.627040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.140 [2024-10-08 10:58:46.633598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.140 [2024-10-08 10:58:46.633716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:26.140 [2024-10-08 10:58:46.633730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.140 [2024-10-08 10:58:46.633743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.140 [2024-10-08 10:58:46.633766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.140 [2024-10-08 10:58:46.633774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:26.140 [2024-10-08 10:58:46.633789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.140 
[2024-10-08 10:58:46.633921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.140 [2024-10-08 10:58:46.633965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.140 [2024-10-08 10:58:46.633973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:26.140 [2024-10-08 10:58:46.633981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.140 [2024-10-08 10:58:46.633988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.140 [2024-10-08 10:58:46.634039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.140 [2024-10-08 10:58:46.634048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:26.140 [2024-10-08 10:58:46.634056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.140 [2024-10-08 10:58:46.634063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.140 [2024-10-08 10:58:46.634088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.140 [2024-10-08 10:58:46.634096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:26.140 [2024-10-08 10:58:46.634104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.140 [2024-10-08 10:58:46.634112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.140 [2024-10-08 10:58:46.634150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.140 [2024-10-08 10:58:46.634163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:26.140 [2024-10-08 10:58:46.634171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.140 [2024-10-08 10:58:46.634178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.140 [2024-10-08 10:58:46.634215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:26.140 [2024-10-08 10:58:46.634224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:26.140 [2024-10-08 10:58:46.634232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:26.140 [2024-10-08 10:58:46.634243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.140 [2024-10-08 10:58:46.634350] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 28.936 ms, result 0 00:27:26.712 00:27:26.712 00:27:26.712 10:58:47 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:27:26.712 [2024-10-08 10:58:47.080095] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:27:26.712 [2024-10-08 10:58:47.080353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94412 ] 00:27:26.712 [2024-10-08 10:58:47.208786] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:26.712 [2024-10-08 10:58:47.230144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.712 [2024-10-08 10:58:47.263090] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.974 [2024-10-08 10:58:47.349201] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:26.974 [2024-10-08 10:58:47.349264] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:26.974 [2024-10-08 10:58:47.506502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.974 [2024-10-08 10:58:47.506683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:26.974 [2024-10-08 10:58:47.506710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:26.974 [2024-10-08 10:58:47.506719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.974 [2024-10-08 10:58:47.506767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.974 [2024-10-08 10:58:47.506777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:26.974 [2024-10-08 10:58:47.506789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:26.974 [2024-10-08 10:58:47.506816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.974 [2024-10-08 10:58:47.506837] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:26.974 [2024-10-08 10:58:47.507051] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:26.974 [2024-10-08 10:58:47.507065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.974 [2024-10-08 10:58:47.507074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:26.974 [2024-10-08 10:58:47.507084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:27:26.975 [2024-10-08 10:58:47.507093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.975 [2024-10-08 10:58:47.507319] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:26.975 [2024-10-08 10:58:47.507341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.975 [2024-10-08 10:58:47.507349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:26.975 [2024-10-08 10:58:47.507359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:26.975 [2024-10-08 10:58:47.507367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.975 [2024-10-08 10:58:47.507414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.975 [2024-10-08 10:58:47.507426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:26.975 [2024-10-08 10:58:47.507435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:26.975 [2024-10-08 10:58:47.507443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.975 [2024-10-08 10:58:47.507710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.975 [2024-10-08 10:58:47.507722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:26.975 [2024-10-08 10:58:47.507734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:27:26.975 [2024-10-08 10:58:47.507744] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.975 [2024-10-08 10:58:47.507818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.975 [2024-10-08 10:58:47.507831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:26.975 [2024-10-08 10:58:47.507838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:26.975 [2024-10-08 10:58:47.507845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.975 [2024-10-08 10:58:47.507868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.975 [2024-10-08 10:58:47.507876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:26.975 [2024-10-08 10:58:47.507884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:26.975 [2024-10-08 10:58:47.507891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.975 [2024-10-08 10:58:47.507909] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:26.975 [2024-10-08 10:58:47.509315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.975 [2024-10-08 10:58:47.509346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:26.975 [2024-10-08 10:58:47.509355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.409 ms 00:27:26.975 [2024-10-08 10:58:47.509366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.975 [2024-10-08 10:58:47.509393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.975 [2024-10-08 10:58:47.509400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:26.975 [2024-10-08 10:58:47.509408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:26.975 [2024-10-08 10:58:47.509415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.975 [2024-10-08 10:58:47.509433] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:26.975 [2024-10-08 10:58:47.509453] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:26.975 [2024-10-08 10:58:47.509488] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:26.975 [2024-10-08 10:58:47.509505] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:26.975 [2024-10-08 10:58:47.509606] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:26.975 [2024-10-08 10:58:47.509615] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:26.975 [2024-10-08 10:58:47.509625] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:26.975 [2024-10-08 10:58:47.509634] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:26.975 [2024-10-08 10:58:47.509644] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:26.975 [2024-10-08 10:58:47.509656] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:26.975 [2024-10-08 10:58:47.509663] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:27:26.975 [2024-10-08 10:58:47.509674] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:26.975 [2024-10-08 10:58:47.509681] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:26.975 [2024-10-08 10:58:47.509694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.975 [2024-10-08 10:58:47.509701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:26.975 [2024-10-08 10:58:47.509712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:27:26.975 [2024-10-08 10:58:47.509719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.975 [2024-10-08 10:58:47.509811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.975 [2024-10-08 10:58:47.509824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:26.975 [2024-10-08 10:58:47.509831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:27:26.975 [2024-10-08 10:58:47.509841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.975 [2024-10-08 10:58:47.509966] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:26.975 [2024-10-08 10:58:47.509982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:26.975 [2024-10-08 10:58:47.509994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:26.975 [2024-10-08 10:58:47.510007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.975 [2024-10-08 10:58:47.510016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:26.975 [2024-10-08 10:58:47.510024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:26.975 [2024-10-08 10:58:47.510032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:26.975 [2024-10-08 10:58:47.510040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:26.975 [2024-10-08 10:58:47.510048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:26.975 [2024-10-08 10:58:47.510056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:26.975 [2024-10-08 10:58:47.510068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:26.975 [2024-10-08 10:58:47.510076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:26.975 [2024-10-08 10:58:47.510084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:26.975 [2024-10-08 10:58:47.510091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:26.975 [2024-10-08 10:58:47.510099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:26.975 [2024-10-08 10:58:47.510106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.975 [2024-10-08 10:58:47.510113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:26.975 [2024-10-08 10:58:47.510121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:26.975 [2024-10-08 10:58:47.510128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.975 [2024-10-08 10:58:47.510138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:26.975 [2024-10-08 10:58:47.510145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:26.975 [2024-10-08 10:58:47.510153] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:26.975 [2024-10-08 10:58:47.510160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:26.975 [2024-10-08 10:58:47.510167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:26.975 [2024-10-08 10:58:47.510174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:26.975 [2024-10-08 10:58:47.510182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:26.975 [2024-10-08 10:58:47.510189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:26.975 [2024-10-08 10:58:47.510196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:26.975 [2024-10-08 10:58:47.510204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:26.975 [2024-10-08 10:58:47.510211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:26.975 [2024-10-08 10:58:47.510218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:26.975 [2024-10-08 10:58:47.510225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:26.975 [2024-10-08 10:58:47.510232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:26.975 [2024-10-08 10:58:47.510240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:26.975 [2024-10-08 10:58:47.510248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:26.975 [2024-10-08 10:58:47.510260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:26.975 [2024-10-08 10:58:47.510267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:26.975 [2024-10-08 10:58:47.510274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:26.975 [2024-10-08 10:58:47.510282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:26.975 [2024-10-08 10:58:47.510291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.975 [2024-10-08 10:58:47.510298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:26.975 [2024-10-08 10:58:47.510306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:26.975 [2024-10-08 10:58:47.510313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.975 [2024-10-08 10:58:47.510320] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:26.975 [2024-10-08 10:58:47.510329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:26.976 [2024-10-08 10:58:47.510337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:26.976 [2024-10-08 10:58:47.510345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.976 [2024-10-08 10:58:47.510355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:26.976 [2024-10-08 10:58:47.510363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:26.976 [2024-10-08 10:58:47.510370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:26.976 [2024-10-08 10:58:47.510378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:26.976 [2024-10-08 10:58:47.510388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:26.976 [2024-10-08 10:58:47.510395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:27:26.976 [2024-10-08 10:58:47.510404] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:26.976 [2024-10-08 10:58:47.510414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:26.976 [2024-10-08 10:58:47.510423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:26.976 [2024-10-08 10:58:47.510432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:26.976 [2024-10-08 10:58:47.510440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:26.976 [2024-10-08 10:58:47.510448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:26.976 [2024-10-08 10:58:47.510456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:26.976 [2024-10-08 10:58:47.510464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:26.976 [2024-10-08 10:58:47.510472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:26.976 [2024-10-08 10:58:47.510480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:26.976 [2024-10-08 10:58:47.510488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:26.976 [2024-10-08 10:58:47.510496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:26.976 [2024-10-08 10:58:47.510504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:26.976 [2024-10-08 10:58:47.510513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:26.976 [2024-10-08 10:58:47.510521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:26.976 [2024-10-08 10:58:47.510529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:26.976 [2024-10-08 10:58:47.510536] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:26.976 [2024-10-08 10:58:47.510543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:26.976 [2024-10-08 10:58:47.510556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:26.976 [2024-10-08 10:58:47.510564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:26.976 [2024-10-08 10:58:47.510571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:26.976 [2024-10-08 10:58:47.510578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:26.976 [2024-10-08 10:58:47.510585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.976 [2024-10-08 10:58:47.510592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:26.976 [2024-10-08 10:58:47.510600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:27:26.976 [2024-10-08 10:58:47.510607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.976 [2024-10-08 10:58:47.525915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.976 [2024-10-08 10:58:47.525957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:26.976 [2024-10-08 10:58:47.525974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.242 ms 00:27:26.976 [2024-10-08 10:58:47.525984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.976 [2024-10-08 10:58:47.526096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.976 [2024-10-08 10:58:47.526108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:26.976 [2024-10-08 10:58:47.526119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:26.976 [2024-10-08 10:58:47.526127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.976 [2024-10-08 10:58:47.534763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.976 [2024-10-08 10:58:47.534810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:26.976 [2024-10-08 10:58:47.534819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.568 ms 00:27:26.976 [2024-10-08 10:58:47.534826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.976 [2024-10-08 10:58:47.534856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.976 [2024-10-08 10:58:47.534863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:26.976 [2024-10-08 10:58:47.534872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:26.976 [2024-10-08 10:58:47.534879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.976 [2024-10-08 10:58:47.534946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.976 [2024-10-08 10:58:47.534973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:26.976 [2024-10-08 10:58:47.534983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:26.976 [2024-10-08 10:58:47.534994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.976 [2024-10-08 10:58:47.535102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.976 [2024-10-08 10:58:47.535113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:26.976 [2024-10-08 10:58:47.535121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:27:26.976 [2024-10-08 10:58:47.535130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.976 [2024-10-08 10:58:47.539459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.976 [2024-10-08 
10:58:47.539488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:26.976 [2024-10-08 10:58:47.539496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.310 ms 00:27:26.976 [2024-10-08 10:58:47.539508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.976 [2024-10-08 10:58:47.539606] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:26.976 [2024-10-08 10:58:47.539621] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:26.976 [2024-10-08 10:58:47.539629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.976 [2024-10-08 10:58:47.539637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:26.976 [2024-10-08 10:58:47.539644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:26.976 [2024-10-08 10:58:47.539651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.237 [2024-10-08 10:58:47.552062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.237 [2024-10-08 10:58:47.552088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:27.237 [2024-10-08 10:58:47.552104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.394 ms 00:27:27.237 [2024-10-08 10:58:47.552115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.237 [2024-10-08 10:58:47.552222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.237 [2024-10-08 10:58:47.552230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:27.237 [2024-10-08 10:58:47.552237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:27:27.237 [2024-10-08 10:58:47.552248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.237 [2024-10-08 10:58:47.552289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.237 [2024-10-08 10:58:47.552298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:27.237 [2024-10-08 10:58:47.552309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:27:27.237 [2024-10-08 10:58:47.552317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.237 [2024-10-08 10:58:47.552604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.237 [2024-10-08 10:58:47.552619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:27.237 [2024-10-08 10:58:47.552627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:27:27.237 [2024-10-08 10:58:47.552638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.237 [2024-10-08 10:58:47.552653] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:27.237 [2024-10-08 10:58:47.552662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.237 [2024-10-08 10:58:47.552669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:27.237 [2024-10-08 10:58:47.552677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:27.237 [2024-10-08 10:58:47.552686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.237 [2024-10-08 10:58:47.560515] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:27.237 [2024-10-08 10:58:47.560639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.237 [2024-10-08 10:58:47.560649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:27.237 [2024-10-08 10:58:47.560657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.937 ms 00:27:27.237 [2024-10-08 10:58:47.560664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.237 [2024-10-08 10:58:47.562938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.237 [2024-10-08 10:58:47.562961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:27.237 [2024-10-08 10:58:47.562971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.257 ms 00:27:27.237 [2024-10-08 10:58:47.562978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.237 [2024-10-08 10:58:47.563038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.237 [2024-10-08 10:58:47.563048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:27.237 [2024-10-08 10:58:47.563059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:27.237 [2024-10-08 10:58:47.563066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.237 [2024-10-08 10:58:47.563100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.237 [2024-10-08 10:58:47.563109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:27.237 [2024-10-08 10:58:47.563116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:27.237 [2024-10-08 10:58:47.563123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.237 [2024-10-08 10:58:47.563150] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:27.237 [2024-10-08 10:58:47.563162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.237 [2024-10-08 10:58:47.563169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:27.237 [2024-10-08 10:58:47.563176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:27.237 [2024-10-08 10:58:47.563184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.237 [2024-10-08 10:58:47.567197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.237 [2024-10-08 10:58:47.567234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:27.237 [2024-10-08 10:58:47.567244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.995 ms 00:27:27.237 [2024-10-08 10:58:47.567251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.237 [2024-10-08 10:58:47.567319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.237 [2024-10-08 10:58:47.567329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:27.237 [2024-10-08 10:58:47.567337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:27.237 [2024-10-08 10:58:47.567344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.237 [2024-10-08 10:58:47.568159] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 61.281 ms, result 0 
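The 'FTL startup' management process above finishes in 61.281 ms, reported as paired "name:" / "duration:" trace_step records. Below is a minimal sketch for summarizing those per-step timings offline; it is illustrative only and not part of the SPDK tree. It assumes one trace record per line, as SPDK emits them on the console (the capture above wraps several records per line), and reads the log from stdin.

#!/usr/bin/env python3
# Summarize per-step durations from SPDK FTL trace_step log lines.
# A sketch under the assumptions stated above; it relies only on the
# "name: <step>" / "duration: <ms> ms" record pairs visible in this log.
import re
import sys

# Match the step-name and step-duration trace_step records.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

def summarize(stream):
    steps = []           # (step name, duration in ms), in log order
    pending_name = None  # most recent "name:" record awaiting its duration
    for line in stream:
        m = NAME_RE.search(line)
        if m:
            pending_name = m.group(1).strip()
            continue
        m = DUR_RE.search(line)
        if m and pending_name is not None:
            steps.append((pending_name, float(m.group(1))))
            pending_name = None
    return steps

if __name__ == "__main__":
    # Print the slowest steps first.
    for name, ms in sorted(summarize(sys.stdin), key=lambda s: -s[1]):
        print(f"{ms:10.3f} ms  {name}")

Run as "python3 ftl_step_times.py < build.log" (the filename is hypothetical); on the startup excerpt above, 'Initialize metadata' at 15.242 ms would top the list.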
00:27:28.179  [2024-10-08T10:58:50.142Z] Copying: 22/1024 [MB] (22 MBps) [2024-10-08T10:58:51.088Z] Copying: 49/1024 [MB] (26 MBps) [2024-10-08T10:58:52.094Z] Copying: 76/1024 [MB] (27 MBps) [2024-10-08T10:58:53.039Z] Copying: 96/1024 [MB] (19 MBps) [2024-10-08T10:58:53.983Z] Copying: 119/1024 [MB] (22 MBps) [2024-10-08T10:58:54.931Z] Copying: 130/1024 [MB] (11 MBps) [2024-10-08T10:58:55.875Z] Copying: 142/1024 [MB] (11 MBps) [2024-10-08T10:58:56.819Z] Copying: 153/1024 [MB] (11 MBps) [2024-10-08T10:58:57.758Z] Copying: 164/1024 [MB] (11 MBps) [2024-10-08T10:58:59.133Z] Copying: 196/1024 [MB] (32 MBps) [2024-10-08T10:59:00.067Z] Copying: 244/1024 [MB] (47 MBps) [2024-10-08T10:59:01.001Z] Copying: 294/1024 [MB] (50 MBps) [2024-10-08T10:59:01.935Z] Copying: 346/1024 [MB] (51 MBps) [2024-10-08T10:59:02.870Z] Copying: 396/1024 [MB] (50 MBps) [2024-10-08T10:59:03.802Z] Copying: 449/1024 [MB] (52 MBps) [2024-10-08T10:59:05.179Z] Copying: 499/1024 [MB] (49 MBps) [2024-10-08T10:59:05.751Z] Copying: 551/1024 [MB] (51 MBps) [2024-10-08T10:59:07.134Z] Copying: 571/1024 [MB] (20 MBps) [2024-10-08T10:59:08.075Z] Copying: 599/1024 [MB] (27 MBps) [2024-10-08T10:59:09.017Z] Copying: 625/1024 [MB] (25 MBps) [2024-10-08T10:59:09.960Z] Copying: 649/1024 [MB] (23 MBps) [2024-10-08T10:59:10.902Z] Copying: 673/1024 [MB] (24 MBps) [2024-10-08T10:59:11.837Z] Copying: 694/1024 [MB] (21 MBps) [2024-10-08T10:59:12.777Z] Copying: 733/1024 [MB] (38 MBps) [2024-10-08T10:59:14.165Z] Copying: 754/1024 [MB] (21 MBps) [2024-10-08T10:59:14.746Z] Copying: 783/1024 [MB] (28 MBps) [2024-10-08T10:59:16.128Z] Copying: 805/1024 [MB] (21 MBps) [2024-10-08T10:59:17.072Z] Copying: 839/1024 [MB] (34 MBps) [2024-10-08T10:59:18.014Z] Copying: 858/1024 [MB] (19 MBps) [2024-10-08T10:59:18.955Z] Copying: 879/1024 [MB] (21 MBps) [2024-10-08T10:59:19.896Z] Copying: 905/1024 [MB] (25 MBps) [2024-10-08T10:59:20.866Z] Copying: 927/1024 [MB] (21 MBps) [2024-10-08T10:59:21.810Z] Copying: 946/1024 [MB] (19 MBps) [2024-10-08T10:59:22.754Z] Copying: 970/1024 [MB] (23 MBps) [2024-10-08T10:59:24.147Z] Copying: 986/1024 [MB] (16 MBps) [2024-10-08T10:59:25.091Z] Copying: 1007/1024 [MB] (21 MBps) [2024-10-08T10:59:25.091Z] Copying: 1023/1024 [MB] (16 MBps) [2024-10-08T10:59:25.091Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-10-08 10:59:24.859083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.514 [2024-10-08 10:59:24.859311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:04.514 [2024-10-08 10:59:24.859442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:04.514 [2024-10-08 10:59:24.859475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.514 [2024-10-08 10:59:24.859521] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:04.514 [2024-10-08 10:59:24.860027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.514 [2024-10-08 10:59:24.860066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:04.514 [2024-10-08 10:59:24.860088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:28:04.514 [2024-10-08 10:59:24.860223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.514 [2024-10-08 10:59:24.860472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.514 [2024-10-08 10:59:24.860598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop 
core poller 00:28:04.514 [2024-10-08 10:59:24.860704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:28:04.514 [2024-10-08 10:59:24.860728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.514 [2024-10-08 10:59:24.860767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.514 [2024-10-08 10:59:24.860805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:04.514 [2024-10-08 10:59:24.860828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:04.514 [2024-10-08 10:59:24.860847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.514 [2024-10-08 10:59:24.860904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.514 [2024-10-08 10:59:24.860969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:04.514 [2024-10-08 10:59:24.861015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:04.514 [2024-10-08 10:59:24.861033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.514 [2024-10-08 10:59:24.861060] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:04.514 [2024-10-08 10:59:24.861085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:04.514 [2024-10-08 10:59:24.861117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:04.514 [2024-10-08 10:59:24.861147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:04.514 [2024-10-08 10:59:24.861177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:04.514 [2024-10-08 10:59:24.861257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:04.514 [2024-10-08 10:59:24.861289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:04.514 [2024-10-08 10:59:24.861319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:04.514 [2024-10-08 10:59:24.861350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:04.514 [2024-10-08 10:59:24.861380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:04.514 [2024-10-08 10:59:24.861444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:04.514 [2024-10-08 10:59:24.861477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 
00:28:04.515 [2024-10-08 10:59:24.861571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 
wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.861996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862189] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:04.515 [2024-10-08 10:59:24.862258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:04.516 [2024-10-08 10:59:24.862275] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:04.516 [2024-10-08 10:59:24.862283] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 849f357b-59b2-4ad1-93bf-cdd686aa264c 00:28:04.516 [2024-10-08 10:59:24.862293] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:04.516 [2024-10-08 10:59:24.862305] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:04.516 [2024-10-08 10:59:24.862313] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:04.516 [2024-10-08 10:59:24.862324] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:04.516 [2024-10-08 10:59:24.862331] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:04.516 [2024-10-08 10:59:24.862342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:04.516 [2024-10-08 10:59:24.862352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:04.516 [2024-10-08 10:59:24.862359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:04.516 [2024-10-08 10:59:24.862366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:04.516 [2024-10-08 10:59:24.862373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.516 [2024-10-08 10:59:24.862381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:04.516 [2024-10-08 10:59:24.862389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:28:04.516 [2024-10-08 10:59:24.862397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.864812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.516 [2024-10-08 10:59:24.864900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:04.516 [2024-10-08 10:59:24.864958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.399 ms 00:28:04.516 [2024-10-08 10:59:24.864986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.865074] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.516 [2024-10-08 10:59:24.865128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:04.516 [2024-10-08 10:59:24.865155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:04.516 [2024-10-08 10:59:24.865179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.869524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.516 [2024-10-08 10:59:24.869615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:04.516 [2024-10-08 10:59:24.869661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.516 [2024-10-08 10:59:24.869682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.869741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.516 [2024-10-08 10:59:24.869761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:04.516 [2024-10-08 10:59:24.869780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.516 [2024-10-08 10:59:24.869826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.869889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.516 [2024-10-08 10:59:24.869913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:04.516 [2024-10-08 10:59:24.869989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.516 [2024-10-08 10:59:24.870012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.870040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.516 [2024-10-08 10:59:24.870060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:04.516 [2024-10-08 10:59:24.870079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.516 [2024-10-08 10:59:24.870104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.878673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.516 [2024-10-08 10:59:24.878710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:04.516 [2024-10-08 10:59:24.878729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.516 [2024-10-08 10:59:24.878737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.886141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.516 [2024-10-08 10:59:24.886173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:04.516 [2024-10-08 10:59:24.886190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.516 [2024-10-08 10:59:24.886197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.886220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.516 [2024-10-08 10:59:24.886229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:04.516 [2024-10-08 10:59:24.886236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.516 [2024-10-08 10:59:24.886243] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.886282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.516 [2024-10-08 10:59:24.886290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:04.516 [2024-10-08 10:59:24.886297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.516 [2024-10-08 10:59:24.886305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.886352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.516 [2024-10-08 10:59:24.886364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:04.516 [2024-10-08 10:59:24.886372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.516 [2024-10-08 10:59:24.886379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.886399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.516 [2024-10-08 10:59:24.886407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:04.516 [2024-10-08 10:59:24.886415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.516 [2024-10-08 10:59:24.886423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.886456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.516 [2024-10-08 10:59:24.886467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:04.516 [2024-10-08 10:59:24.886481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.516 [2024-10-08 10:59:24.886491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.886528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:04.516 [2024-10-08 10:59:24.886537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:04.516 [2024-10-08 10:59:24.886545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:04.516 [2024-10-08 10:59:24.886552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.516 [2024-10-08 10:59:24.886663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 27.552 ms, result 0 00:28:04.516 00:28:04.516 00:28:04.516 10:59:25 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:07.062 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:07.062 10:59:27 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:28:07.062 [2024-10-08 10:59:27.267200] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:28:07.062 [2024-10-08 10:59:27.267319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94820 ] 00:28:07.062 [2024-10-08 10:59:27.396295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:28:07.062 [2024-10-08 10:59:27.418163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.062 [2024-10-08 10:59:27.450499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.062 [2024-10-08 10:59:27.536145] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:07.062 [2024-10-08 10:59:27.536208] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:07.324 [2024-10-08 10:59:27.693879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.693918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:07.325 [2024-10-08 10:59:27.693934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:07.325 [2024-10-08 10:59:27.693942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.693995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.694005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:07.325 [2024-10-08 10:59:27.694013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:07.325 [2024-10-08 10:59:27.694025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.694043] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:07.325 [2024-10-08 10:59:27.694282] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:07.325 [2024-10-08 10:59:27.694296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.694306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:07.325 [2024-10-08 10:59:27.694314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:28:07.325 [2024-10-08 10:59:27.694324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.694564] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:07.325 [2024-10-08 10:59:27.694586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.694593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:07.325 [2024-10-08 10:59:27.694601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:28:07.325 [2024-10-08 10:59:27.694608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.694656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.694668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:07.325 [2024-10-08 10:59:27.694675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:07.325 [2024-10-08 10:59:27.694683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.694967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.694979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:07.325 [2024-10-08 10:59:27.694986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:28:07.325 [2024-10-08 10:59:27.694997] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.695067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.695075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:07.325 [2024-10-08 10:59:27.695082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:07.325 [2024-10-08 10:59:27.695092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.695112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.695120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:07.325 [2024-10-08 10:59:27.695127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:07.325 [2024-10-08 10:59:27.695134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.695156] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:07.325 [2024-10-08 10:59:27.696529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.696555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:07.325 [2024-10-08 10:59:27.696565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.380 ms 00:28:07.325 [2024-10-08 10:59:27.696572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.696601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.696609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:07.325 [2024-10-08 10:59:27.696622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:07.325 [2024-10-08 10:59:27.696628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.696645] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:07.325 [2024-10-08 10:59:27.696662] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:07.325 [2024-10-08 10:59:27.696697] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:07.325 [2024-10-08 10:59:27.696711] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:07.325 [2024-10-08 10:59:27.696833] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:07.325 [2024-10-08 10:59:27.696844] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:07.325 [2024-10-08 10:59:27.696854] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:07.325 [2024-10-08 10:59:27.696867] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:07.325 [2024-10-08 10:59:27.696875] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:07.325 [2024-10-08 10:59:27.696888] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:07.325 [2024-10-08 10:59:27.696895] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] L2P address size: 4 00:28:07.325 [2024-10-08 10:59:27.696903] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:07.325 [2024-10-08 10:59:27.696910] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:07.325 [2024-10-08 10:59:27.696917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.696924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:07.325 [2024-10-08 10:59:27.696931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:28:07.325 [2024-10-08 10:59:27.696941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.697023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.697031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:07.325 [2024-10-08 10:59:27.697042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:07.325 [2024-10-08 10:59:27.697051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.697145] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:07.325 [2024-10-08 10:59:27.697155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:07.325 [2024-10-08 10:59:27.697163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:07.325 [2024-10-08 10:59:27.697172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.325 [2024-10-08 10:59:27.697181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:07.325 [2024-10-08 10:59:27.697189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:07.325 [2024-10-08 10:59:27.697197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:07.325 [2024-10-08 10:59:27.697205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:07.325 [2024-10-08 10:59:27.697213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:07.325 [2024-10-08 10:59:27.697221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:07.325 [2024-10-08 10:59:27.697233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:07.325 [2024-10-08 10:59:27.697240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:07.325 [2024-10-08 10:59:27.697247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:07.325 [2024-10-08 10:59:27.697255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:07.325 [2024-10-08 10:59:27.697262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:07.325 [2024-10-08 10:59:27.697270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.325 [2024-10-08 10:59:27.697277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:07.325 [2024-10-08 10:59:27.697285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:07.325 [2024-10-08 10:59:27.697292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.325 [2024-10-08 10:59:27.697301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:07.325 [2024-10-08 10:59:27.697309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:07.325 [2024-10-08 
10:59:27.697316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.325 [2024-10-08 10:59:27.697323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:07.325 [2024-10-08 10:59:27.697331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:07.325 [2024-10-08 10:59:27.697338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.325 [2024-10-08 10:59:27.697345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:07.325 [2024-10-08 10:59:27.697352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:07.325 [2024-10-08 10:59:27.697359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.325 [2024-10-08 10:59:27.697367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:07.325 [2024-10-08 10:59:27.697374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:07.325 [2024-10-08 10:59:27.697381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:07.325 [2024-10-08 10:59:27.697388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:07.325 [2024-10-08 10:59:27.697396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:07.325 [2024-10-08 10:59:27.697403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:07.325 [2024-10-08 10:59:27.697411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:07.325 [2024-10-08 10:59:27.697423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:07.325 [2024-10-08 10:59:27.697431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:07.325 [2024-10-08 10:59:27.697438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:07.325 [2024-10-08 10:59:27.697446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:07.325 [2024-10-08 10:59:27.697453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.325 [2024-10-08 10:59:27.697461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:07.325 [2024-10-08 10:59:27.697468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:07.325 [2024-10-08 10:59:27.697476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.325 [2024-10-08 10:59:27.697483] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:07.325 [2024-10-08 10:59:27.697492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:07.325 [2024-10-08 10:59:27.697499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:07.325 [2024-10-08 10:59:27.697507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:07.325 [2024-10-08 10:59:27.697517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:07.325 [2024-10-08 10:59:27.697525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:07.325 [2024-10-08 10:59:27.697532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:07.325 [2024-10-08 10:59:27.697540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:07.325 [2024-10-08 10:59:27.697549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:07.325 [2024-10-08 10:59:27.697557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 102400.00 MiB 00:28:07.325 [2024-10-08 10:59:27.697566] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:07.325 [2024-10-08 10:59:27.697576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.325 [2024-10-08 10:59:27.697584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:07.325 [2024-10-08 10:59:27.697592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:07.325 [2024-10-08 10:59:27.697600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:07.325 [2024-10-08 10:59:27.697608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:07.325 [2024-10-08 10:59:27.697616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:07.325 [2024-10-08 10:59:27.697624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:07.325 [2024-10-08 10:59:27.697631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:07.325 [2024-10-08 10:59:27.697639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:07.325 [2024-10-08 10:59:27.697647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:07.325 [2024-10-08 10:59:27.697657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:07.325 [2024-10-08 10:59:27.697665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:07.325 [2024-10-08 10:59:27.697672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:07.325 [2024-10-08 10:59:27.697682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:07.325 [2024-10-08 10:59:27.697690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:07.325 [2024-10-08 10:59:27.697698] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:07.325 [2024-10-08 10:59:27.697707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.325 [2024-10-08 10:59:27.697719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:07.325 [2024-10-08 10:59:27.697728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:07.325 [2024-10-08 10:59:27.697736] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:07.325 [2024-10-08 10:59:27.697744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:07.325 [2024-10-08 10:59:27.697752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.697764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:07.325 [2024-10-08 10:59:27.697772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:28:07.325 [2024-10-08 10:59:27.697780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.715006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.715053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:07.325 [2024-10-08 10:59:27.715073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.125 ms 00:28:07.325 [2024-10-08 10:59:27.715084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.715207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.715220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:07.325 [2024-10-08 10:59:27.715231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:28:07.325 [2024-10-08 10:59:27.715241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.724016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.724143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:07.325 [2024-10-08 10:59:27.724157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.699 ms 00:28:07.325 [2024-10-08 10:59:27.724165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.724192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.724200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:07.325 [2024-10-08 10:59:27.724208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:07.325 [2024-10-08 10:59:27.724215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.724280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.724290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:07.325 [2024-10-08 10:59:27.724301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:07.325 [2024-10-08 10:59:27.724308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.724415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.724424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:07.325 [2024-10-08 10:59:27.724431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:28:07.325 [2024-10-08 10:59:27.724440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.728778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.728821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:07.325 [2024-10-08 10:59:27.728830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.319 ms 00:28:07.325 [2024-10-08 10:59:27.728841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.728943] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:07.325 [2024-10-08 10:59:27.728955] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:07.325 [2024-10-08 10:59:27.728965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.728972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:07.325 [2024-10-08 10:59:27.728980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:07.325 [2024-10-08 10:59:27.728987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.741397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.741423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:07.325 [2024-10-08 10:59:27.741440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.392 ms 00:28:07.325 [2024-10-08 10:59:27.741450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.325 [2024-10-08 10:59:27.741555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.325 [2024-10-08 10:59:27.741567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:07.325 [2024-10-08 10:59:27.741575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:28:07.326 [2024-10-08 10:59:27.741582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.326 [2024-10-08 10:59:27.741623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.326 [2024-10-08 10:59:27.741632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:07.326 [2024-10-08 10:59:27.741643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:28:07.326 [2024-10-08 10:59:27.741650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.326 [2024-10-08 10:59:27.741962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.326 [2024-10-08 10:59:27.741981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:07.326 [2024-10-08 10:59:27.741989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:28:07.326 [2024-10-08 10:59:27.742000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.326 [2024-10-08 10:59:27.742015] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:07.326 [2024-10-08 10:59:27.742024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.326 [2024-10-08 10:59:27.742031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:07.326 [2024-10-08 10:59:27.742038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:07.326 [2024-10-08 10:59:27.742047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
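Every FTL management step above is traced by the same four NOTICE entries from mngt/ftl_mngt.c: an "Action" (or "Rollback") marker, then "name", "duration", and "status". A minimal sketch, assuming plain Python over the raw console text (this is not an SPDK tool; the regex is purely illustrative), that pairs those fields back together and ranks the slowest steps:

import re

# trace_step() logs name/duration/status as consecutive NOTICE entries;
# stitch each triple back together and sort by duration (descending).
STEP_RE = re.compile(
    r"name: (?P<name>.+?)\s+\d{2}:\d{2}:\d{2}\.\d+"   # step name, up to the console timestamp
    r".*?duration: (?P<ms>[\d.]+) ms"                 # the matching duration entry
    r".*?status: (?P<status>-?\d+)",                  # the matching status entry
    re.DOTALL,
)

def slowest_steps(log_text, top=5):
    steps = [(float(m["ms"]), m["name"], int(m["status"]))
             for m in STEP_RE.finditer(log_text)]
    return sorted(steps, reverse=True)[:top]

Run over this startup sequence, it would surface "Initialize metadata" (17.125 ms) and "Restore valid map metadata" (12.392 ms) as the bulk of the "FTL startup" total reported a little further below.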
00:28:07.326 [2024-10-08 10:59:27.749896] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:07.326 [2024-10-08 10:59:27.750021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.326 [2024-10-08 10:59:27.750031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:07.326 [2024-10-08 10:59:27.750039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.957 ms 00:28:07.326 [2024-10-08 10:59:27.750046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.326 [2024-10-08 10:59:27.752344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.326 [2024-10-08 10:59:27.752367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:07.326 [2024-10-08 10:59:27.752377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.281 ms 00:28:07.326 [2024-10-08 10:59:27.752385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.326 [2024-10-08 10:59:27.752450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.326 [2024-10-08 10:59:27.752463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:07.326 [2024-10-08 10:59:27.752471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:07.326 [2024-10-08 10:59:27.752478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.326 [2024-10-08 10:59:27.752515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.326 [2024-10-08 10:59:27.752524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:07.326 [2024-10-08 10:59:27.752537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:07.326 [2024-10-08 10:59:27.752544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.326 [2024-10-08 10:59:27.752570] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:07.326 [2024-10-08 10:59:27.752580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.326 [2024-10-08 10:59:27.752587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:07.326 [2024-10-08 10:59:27.752594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:07.326 [2024-10-08 10:59:27.752601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.326 [2024-10-08 10:59:27.756776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.326 [2024-10-08 10:59:27.756825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:07.326 [2024-10-08 10:59:27.756835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.156 ms 00:28:07.326 [2024-10-08 10:59:27.756847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.326 [2024-10-08 10:59:27.756909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:07.326 [2024-10-08 10:59:27.756919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:07.326 [2024-10-08 10:59:27.756926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:07.326 [2024-10-08 10:59:27.756934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.326 [2024-10-08 10:59:27.758190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL 
startup', duration = 63.905 ms, result 0 00:28:08.278
[2024-10-08T10:59:29.798Z] Copying: 10/1024 [MB] (10 MBps)
[2024-10-08T10:59:31.186Z] Copying: 31/1024 [MB] (20 MBps)
[2024-10-08T10:59:32.130Z] Copying: 42272/1048576 [kB] (10200 kBps)
[2024-10-08T10:59:33.074Z] Copying: 51/1024 [MB] (10 MBps)
[2024-10-08T10:59:34.093Z] Copying: 62/1024 [MB] (10 MBps)
[2024-10-08T10:59:35.049Z] Copying: 74/1024 [MB] (11 MBps)
[2024-10-08T10:59:35.993Z] Copying: 85/1024 [MB] (11 MBps)
[2024-10-08T10:59:36.937Z] Copying: 96/1024 [MB] (11 MBps)
[2024-10-08T10:59:37.876Z] Copying: 108/1024 [MB] (11 MBps)
[2024-10-08T10:59:38.814Z] Copying: 139/1024 [MB] (31 MBps)
[2024-10-08T10:59:40.202Z] Copying: 193/1024 [MB] (53 MBps)
[2024-10-08T10:59:40.775Z] Copying: 209/1024 [MB] (16 MBps)
[2024-10-08T10:59:42.161Z] Copying: 226/1024 [MB] (16 MBps)
[2024-10-08T10:59:43.103Z] Copying: 249/1024 [MB] (23 MBps)
[2024-10-08T10:59:44.043Z] Copying: 270/1024 [MB] (21 MBps)
[2024-10-08T10:59:44.986Z] Copying: 292/1024 [MB] (21 MBps)
[2024-10-08T10:59:45.927Z] Copying: 307/1024 [MB] (15 MBps)
[2024-10-08T10:59:46.869Z] Copying: 322/1024 [MB] (15 MBps)
[2024-10-08T10:59:47.826Z] Copying: 344/1024 [MB] (22 MBps)
[2024-10-08T10:59:48.771Z] Copying: 367/1024 [MB] (22 MBps)
[2024-10-08T10:59:50.158Z] Copying: 386/1024 [MB] (18 MBps)
[2024-10-08T10:59:51.102Z] Copying: 408/1024 [MB] (22 MBps)
[2024-10-08T10:59:52.044Z] Copying: 430/1024 [MB] (22 MBps)
[2024-10-08T10:59:52.987Z] Copying: 454/1024 [MB] (23 MBps)
[2024-10-08T10:59:53.927Z] Copying: 478/1024 [MB] (23 MBps)
[2024-10-08T10:59:54.871Z] Copying: 510/1024 [MB] (31 MBps)
[2024-10-08T10:59:55.835Z] Copying: 534/1024 [MB] (24 MBps)
[2024-10-08T10:59:56.778Z] Copying: 558/1024 [MB] (24 MBps)
[2024-10-08T10:59:58.166Z] Copying: 581/1024 [MB] (22 MBps)
[2024-10-08T10:59:59.106Z] Copying: 603/1024 [MB] (22 MBps)
[2024-10-08T11:00:00.049Z] Copying: 627/1024 [MB] (24 MBps)
[2024-10-08T11:00:00.993Z] Copying: 646/1024 [MB] (18 MBps)
[2024-10-08T11:00:02.001Z] Copying: 666/1024 [MB] (19 MBps)
[2024-10-08T11:00:02.944Z] Copying: 680/1024 [MB] (14 MBps)
[2024-10-08T11:00:03.886Z] Copying: 696/1024 [MB] (15 MBps)
[2024-10-08T11:00:04.826Z] Copying: 713/1024 [MB] (17 MBps)
[2024-10-08T11:00:06.211Z] Copying: 737/1024 [MB] (24 MBps)
[2024-10-08T11:00:06.782Z] Copying: 761/1024 [MB] (23 MBps)
[2024-10-08T11:00:08.165Z] Copying: 786/1024 [MB] (25 MBps)
[2024-10-08T11:00:09.105Z] Copying: 811/1024 [MB] (24 MBps)
[2024-10-08T11:00:10.049Z] Copying: 834/1024 [MB] (23 MBps)
[2024-10-08T11:00:10.993Z] Copying: 856/1024 [MB] (21 MBps)
[2024-10-08T11:00:11.938Z] Copying: 872/1024 [MB] (16 MBps)
[2024-10-08T11:00:12.875Z] Copying: 884/1024 [MB] (11 MBps)
[2024-10-08T11:00:13.818Z] Copying: 920/1024 [MB] (36 MBps)
[2024-10-08T11:00:15.253Z] Copying: 939/1024 [MB] (19 MBps)
[2024-10-08T11:00:15.823Z] Copying: 956/1024 [MB] (16 MBps)
[2024-10-08T11:00:17.205Z] Copying: 978/1024 [MB] (22 MBps)
[2024-10-08T11:00:17.778Z] Copying: 998/1024 [MB] (19 MBps)
[2024-10-08T11:00:19.164Z] Copying: 1020/1024 [MB] (22 MBps)
[2024-10-08T11:00:19.164Z] Copying: 1048436/1048576 [kB] (3148 kBps)
[2024-10-08T11:00:19.164Z] Copying: 1024/1024 [MB] (average 20 MBps)
[2024-10-08 11:00:18.953321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.587 [2024-10-08 11:00:18.953482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:58.587 [2024-10-08 11:00:18.953502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:58.587 [2024-10-08
11:00:18.953510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.587 [2024-10-08 11:00:18.956363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:58.587 [2024-10-08 11:00:18.958147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.587 [2024-10-08 11:00:18.958249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:58.587 [2024-10-08 11:00:18.958303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.748 ms 00:28:58.587 [2024-10-08 11:00:18.958325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.587 [2024-10-08 11:00:18.968707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.587 [2024-10-08 11:00:18.968819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:58.587 [2024-10-08 11:00:18.968872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.258 ms 00:28:58.587 [2024-10-08 11:00:18.968895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.588 [2024-10-08 11:00:18.968935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.588 [2024-10-08 11:00:18.968957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:58.588 [2024-10-08 11:00:18.968978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:58.588 [2024-10-08 11:00:18.968996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.588 [2024-10-08 11:00:18.969052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.588 [2024-10-08 11:00:18.969117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:58.588 [2024-10-08 11:00:18.969141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:58.588 [2024-10-08 11:00:18.969164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.588 [2024-10-08 11:00:18.969180] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:58.588 [2024-10-08 11:00:18.969191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129536 / 261120 wr_cnt: 1 state: open 00:28:58.588 [2024-10-08 11:00:18.969200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 
0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969635] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:58.588 [2024-10-08 11:00:18.969759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 
11:00:18.969840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:58.589 [2024-10-08 11:00:18.969968] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:58.589 [2024-10-08 11:00:18.969975] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 849f357b-59b2-4ad1-93bf-cdd686aa264c 00:28:58.589 [2024-10-08 11:00:18.969989] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129536 00:28:58.589 [2024-10-08 11:00:18.969997] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129568 00:28:58.589 [2024-10-08 11:00:18.970004] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129536 00:28:58.589 [2024-10-08 11:00:18.970012] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:28:58.589 [2024-10-08 11:00:18.970022] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:58.589 [2024-10-08 11:00:18.970029] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:58.589 [2024-10-08 11:00:18.970051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:58.589 [2024-10-08 11:00:18.970057] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:58.589 [2024-10-08 11:00:18.970063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 
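The shutdown statistics just dumped tie out arithmetically: WAF (write amplification factor) is total media writes divided by user writes, and "total valid LBAs" equals band 1's valid-block count in the bands dump. A quick check using only the numbers copied from this run's log:

# Values copied from the ftl_dev_dump_stats output above.
total_writes = 129568   # "total writes" (user data plus FTL metadata writes)
user_writes  = 129536   # "user writes" == "total valid LBAs" == band 1's valid count

waf = total_writes / user_writes
assert f"{waf:.4f}" == "1.0002"     # matches the logged "WAF: 1.0002"

# The spdk_dd progress log above ends with "1024/1024 [MB] (average 20 MBps)";
# 1024 MB / 20 MBps is about 51 s, consistent with the roughly 49 s between
# the first (10:59:29.798Z) and last (11:00:19.164Z) progress timestamps.
print(round(waf, 6), 1024 / 20)     # -> 1.000247 51.2

The 32-block gap between total and user writes is what a near-pure sequential write should look like: almost no relocation, only a sliver of metadata traffic.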
00:28:58.589 [2024-10-08 11:00:18.970070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.589 [2024-10-08 11:00:18.970081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:58.589 [2024-10-08 11:00:18.970089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:28:58.589 [2024-10-08 11:00:18.970096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.971460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.589 [2024-10-08 11:00:18.971483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:58.589 [2024-10-08 11:00:18.971497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.351 ms 00:28:58.589 [2024-10-08 11:00:18.971505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.971583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.589 [2024-10-08 11:00:18.971592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:58.589 [2024-10-08 11:00:18.971601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:58.589 [2024-10-08 11:00:18.971608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.975839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.589 [2024-10-08 11:00:18.975946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:58.589 [2024-10-08 11:00:18.975964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.589 [2024-10-08 11:00:18.975972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.976019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.589 [2024-10-08 11:00:18.976027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:58.589 [2024-10-08 11:00:18.976035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.589 [2024-10-08 11:00:18.976046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.976079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.589 [2024-10-08 11:00:18.976088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:58.589 [2024-10-08 11:00:18.976096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.589 [2024-10-08 11:00:18.976106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.976120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.589 [2024-10-08 11:00:18.976127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:58.589 [2024-10-08 11:00:18.976135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.589 [2024-10-08 11:00:18.976145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.984270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.589 [2024-10-08 11:00:18.984307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:58.589 [2024-10-08 11:00:18.984323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.589 [2024-10-08 11:00:18.984331] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.991522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.589 [2024-10-08 11:00:18.991645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:58.589 [2024-10-08 11:00:18.991659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.589 [2024-10-08 11:00:18.991667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.991709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.589 [2024-10-08 11:00:18.991718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:58.589 [2024-10-08 11:00:18.991727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.589 [2024-10-08 11:00:18.991734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.991762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.589 [2024-10-08 11:00:18.991770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:58.589 [2024-10-08 11:00:18.991783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.589 [2024-10-08 11:00:18.991791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.991974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.589 [2024-10-08 11:00:18.992047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:58.589 [2024-10-08 11:00:18.992057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.589 [2024-10-08 11:00:18.992065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.992094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.589 [2024-10-08 11:00:18.992104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:58.589 [2024-10-08 11:00:18.992111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.589 [2024-10-08 11:00:18.992122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.992155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.589 [2024-10-08 11:00:18.992168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:58.589 [2024-10-08 11:00:18.992181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.589 [2024-10-08 11:00:18.992188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.992228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:58.589 [2024-10-08 11:00:18.992238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:58.589 [2024-10-08 11:00:18.992245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:58.589 [2024-10-08 11:00:18.992252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.589 [2024-10-08 11:00:18.992359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 41.264 ms, result 0 00:28:59.161 00:28:59.161 00:28:59.162 11:00:19 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:28:59.162 [2024-10-08 11:00:19.730364] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:28:59.162 [2024-10-08 11:00:19.730480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95348 ] 00:28:59.422 [2024-10-08 11:00:19.858926] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:59.422 [2024-10-08 11:00:19.879419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.422 [2024-10-08 11:00:19.911783] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.422 [2024-10-08 11:00:19.997727] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:59.683 [2024-10-08 11:00:19.997927] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:59.683 [2024-10-08 11:00:20.155206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.683 [2024-10-08 11:00:20.155249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:59.683 [2024-10-08 11:00:20.155264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:59.683 [2024-10-08 11:00:20.155272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.683 [2024-10-08 11:00:20.155318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.683 [2024-10-08 11:00:20.155329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:59.683 [2024-10-08 11:00:20.155337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:59.683 [2024-10-08 11:00:20.155344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.683 [2024-10-08 11:00:20.155362] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:59.683 [2024-10-08 11:00:20.155593] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:59.683 [2024-10-08 11:00:20.155607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.683 [2024-10-08 11:00:20.155614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:59.683 [2024-10-08 11:00:20.155622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:28:59.683 [2024-10-08 11:00:20.155632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.683 [2024-10-08 11:00:20.155901] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:59.683 [2024-10-08 11:00:20.155924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.683 [2024-10-08 11:00:20.155936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:59.683 [2024-10-08 11:00:20.155947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:59.683 [2024-10-08 11:00:20.155954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.683 [2024-10-08 11:00:20.156000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:28:59.683 [2024-10-08 11:00:20.156014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:59.683 [2024-10-08 11:00:20.156022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:59.683 [2024-10-08 11:00:20.156029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.683 [2024-10-08 11:00:20.156288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.683 [2024-10-08 11:00:20.156298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:59.683 [2024-10-08 11:00:20.156308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:28:59.683 [2024-10-08 11:00:20.156315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.683 [2024-10-08 11:00:20.156379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.683 [2024-10-08 11:00:20.156387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:59.683 [2024-10-08 11:00:20.156395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:59.683 [2024-10-08 11:00:20.156402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.683 [2024-10-08 11:00:20.156422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.683 [2024-10-08 11:00:20.156430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:59.683 [2024-10-08 11:00:20.156437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:59.683 [2024-10-08 11:00:20.156448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.683 [2024-10-08 11:00:20.156464] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:59.683 [2024-10-08 11:00:20.157883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.683 [2024-10-08 11:00:20.157898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:59.683 [2024-10-08 11:00:20.157907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:28:59.683 [2024-10-08 11:00:20.157920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.683 [2024-10-08 11:00:20.157954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.683 [2024-10-08 11:00:20.157963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:59.683 [2024-10-08 11:00:20.157975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:59.683 [2024-10-08 11:00:20.157982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.684 [2024-10-08 11:00:20.157999] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:59.684 [2024-10-08 11:00:20.158015] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:59.684 [2024-10-08 11:00:20.158069] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:59.684 [2024-10-08 11:00:20.158087] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:59.684 [2024-10-08 11:00:20.158189] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:59.684 [2024-10-08 11:00:20.158200] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:59.684 [2024-10-08 11:00:20.158209] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:59.684 [2024-10-08 11:00:20.158219] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:59.684 [2024-10-08 11:00:20.158227] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:59.684 [2024-10-08 11:00:20.158241] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:59.684 [2024-10-08 11:00:20.158248] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:59.684 [2024-10-08 11:00:20.158254] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:59.684 [2024-10-08 11:00:20.158263] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:59.684 [2024-10-08 11:00:20.158271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.684 [2024-10-08 11:00:20.158282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:59.684 [2024-10-08 11:00:20.158290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:28:59.684 [2024-10-08 11:00:20.158299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.684 [2024-10-08 11:00:20.158381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.684 [2024-10-08 11:00:20.158388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:59.684 [2024-10-08 11:00:20.158395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:59.684 [2024-10-08 11:00:20.158404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.684 [2024-10-08 11:00:20.158507] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:59.684 [2024-10-08 11:00:20.158517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:59.684 [2024-10-08 11:00:20.158526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:59.684 [2024-10-08 11:00:20.158533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.684 [2024-10-08 11:00:20.158542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:59.684 [2024-10-08 11:00:20.158549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:59.684 [2024-10-08 11:00:20.158559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:59.684 [2024-10-08 11:00:20.158567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:59.684 [2024-10-08 11:00:20.158575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:59.684 [2024-10-08 11:00:20.158586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:59.684 [2024-10-08 11:00:20.158598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:59.684 [2024-10-08 11:00:20.158606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:59.684 [2024-10-08 11:00:20.158613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:59.684 [2024-10-08 11:00:20.158621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:59.684 [2024-10-08 11:00:20.158629] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:59.684 [2024-10-08 11:00:20.158636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.684 [2024-10-08 11:00:20.158643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:59.684 [2024-10-08 11:00:20.158650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:59.684 [2024-10-08 11:00:20.158657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.684 [2024-10-08 11:00:20.158665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:59.684 [2024-10-08 11:00:20.158672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:59.684 [2024-10-08 11:00:20.158680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:59.684 [2024-10-08 11:00:20.158689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:59.684 [2024-10-08 11:00:20.158696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:59.684 [2024-10-08 11:00:20.158704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:59.684 [2024-10-08 11:00:20.158712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:59.684 [2024-10-08 11:00:20.158719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:59.684 [2024-10-08 11:00:20.158726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:59.684 [2024-10-08 11:00:20.158733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:59.684 [2024-10-08 11:00:20.158740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:59.684 [2024-10-08 11:00:20.158747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:59.684 [2024-10-08 11:00:20.158754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:59.684 [2024-10-08 11:00:20.158762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:59.684 [2024-10-08 11:00:20.158769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:59.684 [2024-10-08 11:00:20.158776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:59.684 [2024-10-08 11:00:20.158784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:59.684 [2024-10-08 11:00:20.158791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:59.684 [2024-10-08 11:00:20.158812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:59.684 [2024-10-08 11:00:20.158821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:59.684 [2024-10-08 11:00:20.158829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.684 [2024-10-08 11:00:20.158837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:59.684 [2024-10-08 11:00:20.158847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:59.684 [2024-10-08 11:00:20.158854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.684 [2024-10-08 11:00:20.158862] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:59.684 [2024-10-08 11:00:20.158870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:59.684 [2024-10-08 11:00:20.158878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:59.684 [2024-10-08 11:00:20.158886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:59.684 [2024-10-08 11:00:20.158896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:59.684 [2024-10-08 11:00:20.158904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:59.684 [2024-10-08 11:00:20.158911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:59.684 [2024-10-08 11:00:20.158919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:59.684 [2024-10-08 11:00:20.158926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:59.684 [2024-10-08 11:00:20.158933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:59.684 [2024-10-08 11:00:20.158942] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:59.684 [2024-10-08 11:00:20.158954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:59.684 [2024-10-08 11:00:20.158962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:59.684 [2024-10-08 11:00:20.158969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:59.684 [2024-10-08 11:00:20.158976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:59.684 [2024-10-08 11:00:20.158983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:59.684 [2024-10-08 11:00:20.158990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:59.684 [2024-10-08 11:00:20.158996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:59.684 [2024-10-08 11:00:20.159003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:59.684 [2024-10-08 11:00:20.159010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:59.684 [2024-10-08 11:00:20.159016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:59.684 [2024-10-08 11:00:20.159023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:59.684 [2024-10-08 11:00:20.159030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:59.684 [2024-10-08 11:00:20.159036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:59.684 [2024-10-08 11:00:20.159043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:59.684 [2024-10-08 11:00:20.159050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
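The superblock dump just above expresses each region as hex block offsets and sizes, while the earlier dump_region lines print MiB. Assuming a 4 KiB FTL block size (an assumption, but one that makes every row in this run line up exactly), the two views agree, as this small check shows:

# Cross-check hex superblock rows against the MiB figures from dump_region.
FTL_BLOCK_SIZE = 4096  # bytes per FTL block (assumed 4 KiB)

def blocks_to_mib(blk_sz_hex):
    return int(blk_sz_hex, 16) * FTL_BLOCK_SIZE / (1024 * 1024)

assert blocks_to_mib("0x5000") == 80.0   # size of the "l2p" region: "blocks: 80.00 MiB"
assert blocks_to_mib("0x800") == 8.0     # each p2l checkpoint region: "8.00 MiB"
assert blocks_to_mib("0x80") == 0.5      # band_md / band_md_mirror: "0.50 MiB"
assert blocks_to_mib("0x20") == 0.125    # sb region, printed truncated as "0.12 MiB"

# The l2p size also follows from the header lines above:
# 20971520 L2P entries at 4 bytes per address is exactly 80 MiB.
assert 20971520 * 4 / (1024 * 1024) == 80.0

The offsets line up the same way: for example the blk_offs:0x5120 row (8 MiB regions) sits at 20768 blocks, i.e. 81.12 MiB, which is exactly where dump_region places p2l0.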
00:28:59.684 [2024-10-08 11:00:20.159057] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:59.684 [2024-10-08 11:00:20.159067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:59.685 [2024-10-08 11:00:20.159075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:59.685 [2024-10-08 11:00:20.159081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:59.685 [2024-10-08 11:00:20.159089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:59.685 [2024-10-08 11:00:20.159096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:59.685 [2024-10-08 11:00:20.159103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.159110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:59.685 [2024-10-08 11:00:20.159118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:28:59.685 [2024-10-08 11:00:20.159124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.173928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.173968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:59.685 [2024-10-08 11:00:20.173985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.742 ms 00:28:59.685 [2024-10-08 11:00:20.173994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.174106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.174117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:59.685 [2024-10-08 11:00:20.174133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:28:59.685 [2024-10-08 11:00:20.174142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.182406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.182439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:59.685 [2024-10-08 11:00:20.182449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.203 ms 00:28:59.685 [2024-10-08 11:00:20.182464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.182492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.182500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:59.685 [2024-10-08 11:00:20.182507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:59.685 [2024-10-08 11:00:20.182514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.182579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.182588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:59.685 [2024-10-08 
11:00:20.182605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:59.685 [2024-10-08 11:00:20.182612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.182718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.182726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:59.685 [2024-10-08 11:00:20.182736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:28:59.685 [2024-10-08 11:00:20.182743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.187130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.187158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:59.685 [2024-10-08 11:00:20.187172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.366 ms 00:28:59.685 [2024-10-08 11:00:20.187181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.187280] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:59.685 [2024-10-08 11:00:20.187293] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:59.685 [2024-10-08 11:00:20.187301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.187309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:59.685 [2024-10-08 11:00:20.187316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:59.685 [2024-10-08 11:00:20.187323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.199576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.199613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:59.685 [2024-10-08 11:00:20.199622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.236 ms 00:28:59.685 [2024-10-08 11:00:20.199636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.199744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.199752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:59.685 [2024-10-08 11:00:20.199763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:28:59.685 [2024-10-08 11:00:20.199773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.199830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.199840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:59.685 [2024-10-08 11:00:20.199851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:28:59.685 [2024-10-08 11:00:20.199858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.200148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.200158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:59.685 [2024-10-08 11:00:20.200165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.255 ms 00:28:59.685 [2024-10-08 11:00:20.200172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.200185] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:59.685 [2024-10-08 11:00:20.200198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.200207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:59.685 [2024-10-08 11:00:20.200215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:59.685 [2024-10-08 11:00:20.200224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.208052] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:59.685 [2024-10-08 11:00:20.208168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.208177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:59.685 [2024-10-08 11:00:20.208185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.927 ms 00:28:59.685 [2024-10-08 11:00:20.208192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.210648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.210763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:59.685 [2024-10-08 11:00:20.210777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.439 ms 00:28:59.685 [2024-10-08 11:00:20.210808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.210856] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:28:59.685 [2024-10-08 11:00:20.211422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.211440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:59.685 [2024-10-08 11:00:20.211450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:28:59.685 [2024-10-08 11:00:20.211461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.211500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.211508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:59.685 [2024-10-08 11:00:20.211517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:59.685 [2024-10-08 11:00:20.211528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.211557] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:59.685 [2024-10-08 11:00:20.211575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.211583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:59.685 [2024-10-08 11:00:20.211597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:59.685 [2024-10-08 11:00:20.211605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.215583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 
11:00:20.215617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:59.685 [2024-10-08 11:00:20.215626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.962 ms 00:28:59.685 [2024-10-08 11:00:20.215633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.215697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.685 [2024-10-08 11:00:20.215708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:59.685 [2024-10-08 11:00:20.215715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:59.685 [2024-10-08 11:00:20.215722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.685 [2024-10-08 11:00:20.216550] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 60.977 ms, result 0 00:29:01.073  [2024-10-08T11:00:22.594Z] Copying: 23/1024 [MB] (23 MBps) [2024-10-08T11:00:23.536Z] Copying: 47/1024 [MB] (23 MBps) [2024-10-08T11:00:24.479Z] Copying: 71/1024 [MB] (23 MBps) [2024-10-08T11:00:25.424Z] Copying: 92/1024 [MB] (20 MBps) [2024-10-08T11:00:26.808Z] Copying: 110/1024 [MB] (18 MBps) [2024-10-08T11:00:27.752Z] Copying: 124/1024 [MB] (13 MBps) [2024-10-08T11:00:28.695Z] Copying: 146/1024 [MB] (21 MBps) [2024-10-08T11:00:29.638Z] Copying: 158/1024 [MB] (12 MBps) [2024-10-08T11:00:30.580Z] Copying: 177/1024 [MB] (19 MBps) [2024-10-08T11:00:31.523Z] Copying: 189/1024 [MB] (11 MBps) [2024-10-08T11:00:32.474Z] Copying: 200/1024 [MB] (11 MBps) [2024-10-08T11:00:33.417Z] Copying: 212/1024 [MB] (11 MBps) [2024-10-08T11:00:34.805Z] Copying: 224/1024 [MB] (11 MBps) [2024-10-08T11:00:35.750Z] Copying: 236/1024 [MB] (11 MBps) [2024-10-08T11:00:36.694Z] Copying: 248/1024 [MB] (11 MBps) [2024-10-08T11:00:37.638Z] Copying: 259/1024 [MB] (11 MBps) [2024-10-08T11:00:38.579Z] Copying: 271/1024 [MB] (11 MBps) [2024-10-08T11:00:39.522Z] Copying: 282/1024 [MB] (11 MBps) [2024-10-08T11:00:40.467Z] Copying: 293/1024 [MB] (11 MBps) [2024-10-08T11:00:41.409Z] Copying: 305/1024 [MB] (11 MBps) [2024-10-08T11:00:42.794Z] Copying: 316/1024 [MB] (11 MBps) [2024-10-08T11:00:43.738Z] Copying: 327/1024 [MB] (11 MBps) [2024-10-08T11:00:44.709Z] Copying: 339/1024 [MB] (11 MBps) [2024-10-08T11:00:45.654Z] Copying: 352/1024 [MB] (13 MBps) [2024-10-08T11:00:46.599Z] Copying: 368/1024 [MB] (16 MBps) [2024-10-08T11:00:47.544Z] Copying: 384/1024 [MB] (15 MBps) [2024-10-08T11:00:48.498Z] Copying: 401/1024 [MB] (17 MBps) [2024-10-08T11:00:49.442Z] Copying: 415/1024 [MB] (13 MBps) [2024-10-08T11:00:50.830Z] Copying: 433/1024 [MB] (18 MBps) [2024-10-08T11:00:51.403Z] Copying: 454/1024 [MB] (20 MBps) [2024-10-08T11:00:52.788Z] Copying: 477/1024 [MB] (22 MBps) [2024-10-08T11:00:53.731Z] Copying: 504/1024 [MB] (27 MBps) [2024-10-08T11:00:54.677Z] Copying: 516/1024 [MB] (11 MBps) [2024-10-08T11:00:55.620Z] Copying: 537/1024 [MB] (20 MBps) [2024-10-08T11:00:56.568Z] Copying: 558/1024 [MB] (21 MBps) [2024-10-08T11:00:57.514Z] Copying: 578/1024 [MB] (19 MBps) [2024-10-08T11:00:58.458Z] Copying: 597/1024 [MB] (18 MBps) [2024-10-08T11:00:59.402Z] Copying: 611/1024 [MB] (14 MBps) [2024-10-08T11:01:00.789Z] Copying: 625/1024 [MB] (13 MBps) [2024-10-08T11:01:01.730Z] Copying: 641/1024 [MB] (16 MBps) [2024-10-08T11:01:02.674Z] Copying: 664/1024 [MB] (22 MBps) [2024-10-08T11:01:03.617Z] Copying: 688/1024 [MB] (24 MBps) [2024-10-08T11:01:04.558Z] Copying: 699/1024 [MB] (11 MBps) 
[2024-10-08T11:01:05.502Z] Copying: 721/1024 [MB] (21 MBps) [2024-10-08T11:01:06.443Z] Copying: 735/1024 [MB] (14 MBps) [2024-10-08T11:01:07.875Z] Copying: 748/1024 [MB] (13 MBps) [2024-10-08T11:01:08.468Z] Copying: 760/1024 [MB] (12 MBps) [2024-10-08T11:01:09.412Z] Copying: 772/1024 [MB] (11 MBps) [2024-10-08T11:01:10.800Z] Copying: 783/1024 [MB] (11 MBps) [2024-10-08T11:01:11.744Z] Copying: 794/1024 [MB] (10 MBps) [2024-10-08T11:01:12.689Z] Copying: 805/1024 [MB] (10 MBps) [2024-10-08T11:01:13.634Z] Copying: 815/1024 [MB] (10 MBps) [2024-10-08T11:01:14.601Z] Copying: 826/1024 [MB] (11 MBps) [2024-10-08T11:01:15.547Z] Copying: 837/1024 [MB] (10 MBps) [2024-10-08T11:01:16.496Z] Copying: 848/1024 [MB] (10 MBps) [2024-10-08T11:01:17.440Z] Copying: 859/1024 [MB] (10 MBps) [2024-10-08T11:01:18.822Z] Copying: 870/1024 [MB] (11 MBps) [2024-10-08T11:01:19.394Z] Copying: 888/1024 [MB] (18 MBps) [2024-10-08T11:01:20.783Z] Copying: 901/1024 [MB] (13 MBps) [2024-10-08T11:01:21.747Z] Copying: 917/1024 [MB] (16 MBps) [2024-10-08T11:01:22.692Z] Copying: 941/1024 [MB] (23 MBps) [2024-10-08T11:01:23.636Z] Copying: 954/1024 [MB] (12 MBps) [2024-10-08T11:01:24.580Z] Copying: 971/1024 [MB] (17 MBps) [2024-10-08T11:01:25.525Z] Copying: 994/1024 [MB] (23 MBps) [2024-10-08T11:01:26.098Z] Copying: 1017/1024 [MB] (22 MBps) [2024-10-08T11:01:26.098Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-10-08 11:01:26.036911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.521 [2024-10-08 11:01:26.037017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:05.521 [2024-10-08 11:01:26.037042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:05.521 [2024-10-08 11:01:26.037055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.521 [2024-10-08 11:01:26.037101] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:05.521 [2024-10-08 11:01:26.038028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.521 [2024-10-08 11:01:26.038080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:05.521 [2024-10-08 11:01:26.038099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:30:05.521 [2024-10-08 11:01:26.038113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.521 [2024-10-08 11:01:26.038503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.521 [2024-10-08 11:01:26.038536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:05.521 [2024-10-08 11:01:26.038551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:30:05.521 [2024-10-08 11:01:26.038564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.521 [2024-10-08 11:01:26.038615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.521 [2024-10-08 11:01:26.038629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:05.521 [2024-10-08 11:01:26.038642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:05.521 [2024-10-08 11:01:26.038661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.521 [2024-10-08 11:01:26.038744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.521 [2024-10-08 11:01:26.038770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL 
SHM clean state 00:30:05.521 [2024-10-08 11:01:26.038788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:30:05.522 [2024-10-08 11:01:26.038823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.522 [2024-10-08 11:01:26.038847] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:05.522 [2024-10-08 11:01:26.038867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:30:05.522 [2024-10-08 11:01:26.038883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.038896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.038908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.038920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.038933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.038946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.038959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.038972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.038985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.038997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039649] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.039982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 
11:01:26.040133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 
00:30:05.522 [2024-10-08 11:01:26.040442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:05.522 [2024-10-08 11:01:26.040752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:05.523 [2024-10-08 11:01:26.040765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:05.523 [2024-10-08 11:01:26.040777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:05.523 [2024-10-08 11:01:26.040790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:05.523 [2024-10-08 11:01:26.040817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:05.523 [2024-10-08 11:01:26.040830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:05.523 [2024-10-08 11:01:26.040843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:05.523 [2024-10-08 11:01:26.040856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:05.523 [2024-10-08 11:01:26.040869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:05.523 [2024-10-08 11:01:26.040881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:05.523 [2024-10-08 11:01:26.040894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 
wr_cnt: 0 state: free 00:30:05.523 [2024-10-08 11:01:26.040907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:05.523 [2024-10-08 11:01:26.040919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:05.523 [2024-10-08 11:01:26.040931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:05.523 [2024-10-08 11:01:26.040956] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:05.523 [2024-10-08 11:01:26.040976] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 849f357b-59b2-4ad1-93bf-cdd686aa264c 00:30:05.523 [2024-10-08 11:01:26.040989] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:30:05.523 [2024-10-08 11:01:26.041001] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1568 00:30:05.523 [2024-10-08 11:01:26.041013] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1536 00:30:05.523 [2024-10-08 11:01:26.041025] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0208 00:30:05.523 [2024-10-08 11:01:26.041037] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:05.523 [2024-10-08 11:01:26.041053] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:05.523 [2024-10-08 11:01:26.041065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:05.523 [2024-10-08 11:01:26.041076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:05.523 [2024-10-08 11:01:26.041086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:05.523 [2024-10-08 11:01:26.041098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.523 [2024-10-08 11:01:26.041110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:05.523 [2024-10-08 11:01:26.041123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.252 ms 00:30:05.523 [2024-10-08 11:01:26.041134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 11:01:26.044147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.523 [2024-10-08 11:01:26.044254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:05.523 [2024-10-08 11:01:26.044272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.988 ms 00:30:05.523 [2024-10-08 11:01:26.044291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 11:01:26.044442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:05.523 [2024-10-08 11:01:26.044455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:05.523 [2024-10-08 11:01:26.044469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:30:05.523 [2024-10-08 11:01:26.044480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 11:01:26.051958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.523 [2024-10-08 11:01:26.052013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:05.523 [2024-10-08 11:01:26.052025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.523 [2024-10-08 11:01:26.052033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 
11:01:26.052098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.523 [2024-10-08 11:01:26.052107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:05.523 [2024-10-08 11:01:26.052116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.523 [2024-10-08 11:01:26.052124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 11:01:26.052188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.523 [2024-10-08 11:01:26.052200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:05.523 [2024-10-08 11:01:26.052216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.523 [2024-10-08 11:01:26.052224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 11:01:26.052242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.523 [2024-10-08 11:01:26.052251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:05.523 [2024-10-08 11:01:26.052260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.523 [2024-10-08 11:01:26.052268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 11:01:26.066250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.523 [2024-10-08 11:01:26.066303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:05.523 [2024-10-08 11:01:26.066325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.523 [2024-10-08 11:01:26.066335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 11:01:26.077323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.523 [2024-10-08 11:01:26.077373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:05.523 [2024-10-08 11:01:26.077384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.523 [2024-10-08 11:01:26.077392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 11:01:26.077439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.523 [2024-10-08 11:01:26.077449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:05.523 [2024-10-08 11:01:26.077457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.523 [2024-10-08 11:01:26.077472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 11:01:26.077509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.523 [2024-10-08 11:01:26.077518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:05.523 [2024-10-08 11:01:26.077526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.523 [2024-10-08 11:01:26.077534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 11:01:26.077591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.523 [2024-10-08 11:01:26.077600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:05.523 [2024-10-08 11:01:26.077609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.523 [2024-10-08 11:01:26.077616] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 11:01:26.077643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.523 [2024-10-08 11:01:26.077652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:05.523 [2024-10-08 11:01:26.077660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.523 [2024-10-08 11:01:26.077667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 11:01:26.077709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.523 [2024-10-08 11:01:26.077718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:05.523 [2024-10-08 11:01:26.077733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.523 [2024-10-08 11:01:26.077744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 11:01:26.077790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:05.523 [2024-10-08 11:01:26.077824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:05.523 [2024-10-08 11:01:26.077833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:05.523 [2024-10-08 11:01:26.077841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:05.523 [2024-10-08 11:01:26.077967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 41.043 ms, result 0 00:30:05.784 00:30:05.784 00:30:05.784 11:01:26 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:08.330 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 93779 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93779 ']' 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93779 00:30:08.330 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (93779) - No such process 00:30:08.330 Process with pid 93779 is not found 00:30:08.330 Remove shared memory files 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 93779 is not found' 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_849f357b-59b2-4ad1-93bf-cdd686aa264c_band_md /dev/hugepages/ftl_849f357b-59b2-4ad1-93bf-cdd686aa264c_l2p_l1 /dev/hugepages/ftl_849f357b-59b2-4ad1-93bf-cdd686aa264c_l2p_l2 
/dev/hugepages/ftl_849f357b-59b2-4ad1-93bf-cdd686aa264c_l2p_l2_ctx /dev/hugepages/ftl_849f357b-59b2-4ad1-93bf-cdd686aa264c_nvc_md /dev/hugepages/ftl_849f357b-59b2-4ad1-93bf-cdd686aa264c_p2l_pool /dev/hugepages/ftl_849f357b-59b2-4ad1-93bf-cdd686aa264c_sb /dev/hugepages/ftl_849f357b-59b2-4ad1-93bf-cdd686aa264c_sb_shm /dev/hugepages/ftl_849f357b-59b2-4ad1-93bf-cdd686aa264c_trim_bitmap /dev/hugepages/ftl_849f357b-59b2-4ad1-93bf-cdd686aa264c_trim_log /dev/hugepages/ftl_849f357b-59b2-4ad1-93bf-cdd686aa264c_trim_md /dev/hugepages/ftl_849f357b-59b2-4ad1-93bf-cdd686aa264c_vmap 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:30:08.330 00:30:08.330 real 3m41.445s 00:30:08.330 user 3m31.736s 00:30:08.330 sys 0m10.621s 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:08.330 ************************************ 00:30:08.330 END TEST ftl_restore_fast 00:30:08.330 11:01:28 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:08.330 ************************************ 00:30:08.330 11:01:28 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:30:08.330 11:01:28 ftl -- ftl/ftl.sh@14 -- # killprocess 85112 00:30:08.330 11:01:28 ftl -- common/autotest_common.sh@950 -- # '[' -z 85112 ']' 00:30:08.330 Process with pid 85112 is not found 00:30:08.330 11:01:28 ftl -- common/autotest_common.sh@954 -- # kill -0 85112 00:30:08.330 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85112) - No such process 00:30:08.330 11:01:28 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 85112 is not found' 00:30:08.330 11:01:28 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:30:08.330 11:01:28 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=96065 00:30:08.330 11:01:28 ftl -- ftl/ftl.sh@20 -- # waitforlisten 96065 00:30:08.330 11:01:28 ftl -- common/autotest_common.sh@831 -- # '[' -z 96065 ']' 00:30:08.330 11:01:28 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:08.330 11:01:28 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.330 11:01:28 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:08.330 11:01:28 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.330 11:01:28 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:08.330 11:01:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:08.330 [2024-10-08 11:01:28.685218] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:30:08.330 [2024-10-08 11:01:28.685335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96065 ] 00:30:08.330 [2024-10-08 11:01:28.813640] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:30:08.330 [2024-10-08 11:01:28.831574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.330 [2024-10-08 11:01:28.864945] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.271 11:01:29 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:09.271 11:01:29 ftl -- common/autotest_common.sh@864 -- # return 0 00:30:09.271 11:01:29 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:09.271 nvme0n1 00:30:09.271 11:01:29 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:30:09.271 11:01:29 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:09.271 11:01:29 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:09.532 11:01:29 ftl -- ftl/common.sh@28 -- # stores=ec1eeb8d-64a6-40b4-a626-5f70d49df9f2 00:30:09.532 11:01:29 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:30:09.532 11:01:29 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec1eeb8d-64a6-40b4-a626-5f70d49df9f2 00:30:09.794 11:01:30 ftl -- ftl/ftl.sh@23 -- # killprocess 96065 00:30:09.794 11:01:30 ftl -- common/autotest_common.sh@950 -- # '[' -z 96065 ']' 00:30:09.794 11:01:30 ftl -- common/autotest_common.sh@954 -- # kill -0 96065 00:30:09.794 11:01:30 ftl -- common/autotest_common.sh@955 -- # uname 00:30:09.794 11:01:30 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:09.794 11:01:30 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96065 00:30:09.794 11:01:30 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:09.794 11:01:30 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:09.794 11:01:30 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96065' 00:30:09.794 killing process with pid 96065 00:30:09.794 11:01:30 ftl -- common/autotest_common.sh@969 -- # kill 96065 00:30:09.794 11:01:30 ftl -- common/autotest_common.sh@974 -- # wait 96065 00:30:10.055 11:01:30 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:10.317 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:10.317 Waiting for block devices as requested 00:30:10.317 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:10.317 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:10.578 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:10.578 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:15.865 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:15.865 11:01:36 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:30:15.865 Remove shared memory files 00:30:15.865 11:01:36 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:15.865 11:01:36 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:30:15.865 11:01:36 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:30:15.865 11:01:36 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:30:15.865 11:01:36 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:15.865 11:01:36 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:30:15.865 ************************************ 00:30:15.865 END TEST ftl 00:30:15.865 ************************************ 00:30:15.865 00:30:15.865 real 16m13.382s 00:30:15.865 user 18m4.398s 00:30:15.865 sys 1m13.010s 00:30:15.865 11:01:36 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:15.865 11:01:36 ftl -- 
common/autotest_common.sh@10 -- # set +x 00:30:15.865 11:01:36 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:30:15.865 11:01:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:30:15.865 11:01:36 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:30:15.865 11:01:36 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:30:15.865 11:01:36 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:30:15.865 11:01:36 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:30:15.865 11:01:36 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:30:15.865 11:01:36 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:30:15.865 11:01:36 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:30:15.865 11:01:36 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:30:15.865 11:01:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:15.865 11:01:36 -- common/autotest_common.sh@10 -- # set +x 00:30:15.865 11:01:36 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:30:15.865 11:01:36 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:15.865 11:01:36 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:15.865 11:01:36 -- common/autotest_common.sh@10 -- # set +x 00:30:17.249 INFO: APP EXITING 00:30:17.249 INFO: killing all VMs 00:30:17.249 INFO: killing vhost app 00:30:17.249 INFO: EXIT DONE 00:30:17.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:17.822 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:17.822 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:17.822 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:30:17.822 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:30:18.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:18.342 Cleaning 00:30:18.342 Removing: /var/run/dpdk/spdk0/config 00:30:18.342 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:18.342 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:18.342 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:18.342 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:18.342 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:18.342 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:18.342 Removing: /var/run/dpdk/spdk0 00:30:18.342 Removing: /var/run/dpdk/spdk_pid70599 00:30:18.342 Removing: /var/run/dpdk/spdk_pid70763 00:30:18.342 Removing: /var/run/dpdk/spdk_pid70964 00:30:18.342 Removing: /var/run/dpdk/spdk_pid71046 00:30:18.342 Removing: /var/run/dpdk/spdk_pid71074 00:30:18.603 Removing: /var/run/dpdk/spdk_pid71181 00:30:18.603 Removing: /var/run/dpdk/spdk_pid71199 00:30:18.603 Removing: /var/run/dpdk/spdk_pid71381 00:30:18.603 Removing: /var/run/dpdk/spdk_pid71455 00:30:18.603 Removing: /var/run/dpdk/spdk_pid71540 00:30:18.603 Removing: /var/run/dpdk/spdk_pid71634 00:30:18.603 Removing: /var/run/dpdk/spdk_pid71715 00:30:18.603 Removing: /var/run/dpdk/spdk_pid71754 00:30:18.603 Removing: /var/run/dpdk/spdk_pid71791 00:30:18.603 Removing: /var/run/dpdk/spdk_pid71861 00:30:18.603 Removing: /var/run/dpdk/spdk_pid71962 00:30:18.603 Removing: /var/run/dpdk/spdk_pid72387 00:30:18.603 Removing: /var/run/dpdk/spdk_pid72429 00:30:18.603 Removing: /var/run/dpdk/spdk_pid72481 00:30:18.603 Removing: /var/run/dpdk/spdk_pid72491 00:30:18.603 Removing: /var/run/dpdk/spdk_pid72554 00:30:18.603 Removing: /var/run/dpdk/spdk_pid72560 00:30:18.603 Removing: /var/run/dpdk/spdk_pid72618 00:30:18.603 Removing: /var/run/dpdk/spdk_pid72634 00:30:18.603 
Removing: /var/run/dpdk/spdk_pid72686 00:30:18.603 Removing: /var/run/dpdk/spdk_pid72694 00:30:18.603 Removing: /var/run/dpdk/spdk_pid72736 00:30:18.603 Removing: /var/run/dpdk/spdk_pid72754 00:30:18.603 Removing: /var/run/dpdk/spdk_pid72881 00:30:18.603 Removing: /var/run/dpdk/spdk_pid72923 00:30:18.603 Removing: /var/run/dpdk/spdk_pid73001 00:30:18.603 Removing: /var/run/dpdk/spdk_pid73162 00:30:18.603 Removing: /var/run/dpdk/spdk_pid73235 00:30:18.603 Removing: /var/run/dpdk/spdk_pid73266 00:30:18.603 Removing: /var/run/dpdk/spdk_pid73676 00:30:18.603 Removing: /var/run/dpdk/spdk_pid73769 00:30:18.603 Removing: /var/run/dpdk/spdk_pid73858 00:30:18.603 Removing: /var/run/dpdk/spdk_pid73900 00:30:18.603 Removing: /var/run/dpdk/spdk_pid73931 00:30:18.603 Removing: /var/run/dpdk/spdk_pid74004 00:30:18.603 Removing: /var/run/dpdk/spdk_pid74617 00:30:18.603 Removing: /var/run/dpdk/spdk_pid74643 00:30:18.603 Removing: /var/run/dpdk/spdk_pid75103 00:30:18.603 Removing: /var/run/dpdk/spdk_pid75190 00:30:18.603 Removing: /var/run/dpdk/spdk_pid75295 00:30:18.603 Removing: /var/run/dpdk/spdk_pid75337 00:30:18.603 Removing: /var/run/dpdk/spdk_pid75357 00:30:18.603 Removing: /var/run/dpdk/spdk_pid75383 00:30:18.603 Removing: /var/run/dpdk/spdk_pid77201 00:30:18.603 Removing: /var/run/dpdk/spdk_pid77321 00:30:18.603 Removing: /var/run/dpdk/spdk_pid77331 00:30:18.603 Removing: /var/run/dpdk/spdk_pid77343 00:30:18.603 Removing: /var/run/dpdk/spdk_pid77382 00:30:18.603 Removing: /var/run/dpdk/spdk_pid77386 00:30:18.603 Removing: /var/run/dpdk/spdk_pid77398 00:30:18.603 Removing: /var/run/dpdk/spdk_pid77443 00:30:18.603 Removing: /var/run/dpdk/spdk_pid77447 00:30:18.603 Removing: /var/run/dpdk/spdk_pid77459 00:30:18.603 Removing: /var/run/dpdk/spdk_pid77504 00:30:18.603 Removing: /var/run/dpdk/spdk_pid77508 00:30:18.603 Removing: /var/run/dpdk/spdk_pid77520 00:30:18.603 Removing: /var/run/dpdk/spdk_pid78882 00:30:18.603 Removing: /var/run/dpdk/spdk_pid78968 00:30:18.603 Removing: /var/run/dpdk/spdk_pid80364 00:30:18.603 Removing: /var/run/dpdk/spdk_pid81717 00:30:18.603 Removing: /var/run/dpdk/spdk_pid81777 00:30:18.603 Removing: /var/run/dpdk/spdk_pid81832 00:30:18.603 Removing: /var/run/dpdk/spdk_pid81892 00:30:18.603 Removing: /var/run/dpdk/spdk_pid81969 00:30:18.603 Removing: /var/run/dpdk/spdk_pid82033 00:30:18.603 Removing: /var/run/dpdk/spdk_pid82175 00:30:18.603 Removing: /var/run/dpdk/spdk_pid82522 00:30:18.603 Removing: /var/run/dpdk/spdk_pid82548 00:30:18.603 Removing: /var/run/dpdk/spdk_pid82991 00:30:18.603 Removing: /var/run/dpdk/spdk_pid83171 00:30:18.603 Removing: /var/run/dpdk/spdk_pid83265 00:30:18.603 Removing: /var/run/dpdk/spdk_pid83369 00:30:18.603 Removing: /var/run/dpdk/spdk_pid83410 00:30:18.603 Removing: /var/run/dpdk/spdk_pid83431 00:30:18.603 Removing: /var/run/dpdk/spdk_pid83718 00:30:18.603 Removing: /var/run/dpdk/spdk_pid83756 00:30:18.603 Removing: /var/run/dpdk/spdk_pid83812 00:30:18.603 Removing: /var/run/dpdk/spdk_pid84175 00:30:18.603 Removing: /var/run/dpdk/spdk_pid84319 00:30:18.603 Removing: /var/run/dpdk/spdk_pid85112 00:30:18.603 Removing: /var/run/dpdk/spdk_pid85228 00:30:18.603 Removing: /var/run/dpdk/spdk_pid85410 00:30:18.604 Removing: /var/run/dpdk/spdk_pid85513 00:30:18.604 Removing: /var/run/dpdk/spdk_pid85804 00:30:18.604 Removing: /var/run/dpdk/spdk_pid86077 00:30:18.604 Removing: /var/run/dpdk/spdk_pid86412 00:30:18.604 Removing: /var/run/dpdk/spdk_pid86583 00:30:18.604 Removing: /var/run/dpdk/spdk_pid86713 00:30:18.604 Removing: 
/var/run/dpdk/spdk_pid86749 00:30:18.604 Removing: /var/run/dpdk/spdk_pid86969 00:30:18.604 Removing: /var/run/dpdk/spdk_pid86990 00:30:18.604 Removing: /var/run/dpdk/spdk_pid87029 00:30:18.604 Removing: /var/run/dpdk/spdk_pid87300 00:30:18.604 Removing: /var/run/dpdk/spdk_pid87519 00:30:18.604 Removing: /var/run/dpdk/spdk_pid88141 00:30:18.604 Removing: /var/run/dpdk/spdk_pid88975 00:30:18.604 Removing: /var/run/dpdk/spdk_pid89728 00:30:18.604 Removing: /var/run/dpdk/spdk_pid90638 00:30:18.604 Removing: /var/run/dpdk/spdk_pid90769 00:30:18.604 Removing: /var/run/dpdk/spdk_pid90840 00:30:18.604 Removing: /var/run/dpdk/spdk_pid91192 00:30:18.604 Removing: /var/run/dpdk/spdk_pid91236 00:30:18.604 Removing: /var/run/dpdk/spdk_pid91808 00:30:18.604 Removing: /var/run/dpdk/spdk_pid92505 00:30:18.604 Removing: /var/run/dpdk/spdk_pid92888 00:30:18.604 Removing: /var/run/dpdk/spdk_pid92993 00:30:18.604 Removing: /var/run/dpdk/spdk_pid93024 00:30:18.604 Removing: /var/run/dpdk/spdk_pid93077 00:30:18.604 Removing: /var/run/dpdk/spdk_pid93123 00:30:18.865 Removing: /var/run/dpdk/spdk_pid93177 00:30:18.865 Removing: /var/run/dpdk/spdk_pid93381 00:30:18.865 Removing: /var/run/dpdk/spdk_pid93432 00:30:18.865 Removing: /var/run/dpdk/spdk_pid93491 00:30:18.865 Removing: /var/run/dpdk/spdk_pid93549 00:30:18.865 Removing: /var/run/dpdk/spdk_pid93565 00:30:18.865 Removing: /var/run/dpdk/spdk_pid93622 00:30:18.865 Removing: /var/run/dpdk/spdk_pid93779 00:30:18.865 Removing: /var/run/dpdk/spdk_pid93982 00:30:18.865 Removing: /var/run/dpdk/spdk_pid94412 00:30:18.865 Removing: /var/run/dpdk/spdk_pid94820 00:30:18.865 Removing: /var/run/dpdk/spdk_pid95348 00:30:18.865 Removing: /var/run/dpdk/spdk_pid96065 00:30:18.865 Clean 00:30:18.865 11:01:39 -- common/autotest_common.sh@1451 -- # return 0 00:30:18.865 11:01:39 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:30:18.865 11:01:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:18.865 11:01:39 -- common/autotest_common.sh@10 -- # set +x 00:30:18.865 11:01:39 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:30:18.865 11:01:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:18.865 11:01:39 -- common/autotest_common.sh@10 -- # set +x 00:30:18.865 11:01:39 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:18.865 11:01:39 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:18.865 11:01:39 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:18.865 11:01:39 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:30:18.865 11:01:39 -- spdk/autotest.sh@394 -- # hostname 00:30:18.865 11:01:39 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:19.126 geninfo: WARNING: invalid characters removed from testname! 
00:30:45.726 11:02:02 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:45.726 11:02:05 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:47.633 11:02:07 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:50.173 11:02:10 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:52.115 11:02:12 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:54.650 11:02:14 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:56.026 11:02:16 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
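The @395 through @404 sequence above merges the base and test tracefiles and then prunes vendored and system code from the total. A condensed sketch of the same flow; the loop is a restructuring for readability, while autotest.sh itself runs one lcov per pattern as traced:

    out=/home/vagrant/spdk_repo/spdk/../output
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    # -a appends tracefiles into one combined report.
    lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" \
      -o "$out/cov_total.info"
    # -r removes records whose file name matches the glob; the trace adds
    # --ignore-errors unused for '/usr/*' so an empty match is tolerated.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done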
00:30:56.026 11:02:16 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:30:56.026 11:02:16 -- common/autotest_common.sh@1681 -- $ lcov --version
00:30:56.026 11:02:16 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:30:56.026 11:02:16 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:30:56.026 11:02:16 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:30:56.026 11:02:16 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:30:56.026 11:02:16 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:30:56.026 11:02:16 -- scripts/common.sh@336 -- $ IFS=.-:
00:30:56.026 11:02:16 -- scripts/common.sh@336 -- $ read -ra ver1
00:30:56.026 11:02:16 -- scripts/common.sh@337 -- $ IFS=.-:
00:30:56.026 11:02:16 -- scripts/common.sh@337 -- $ read -ra ver2
00:30:56.026 11:02:16 -- scripts/common.sh@338 -- $ local 'op=<'
00:30:56.026 11:02:16 -- scripts/common.sh@340 -- $ ver1_l=2
00:30:56.026 11:02:16 -- scripts/common.sh@341 -- $ ver2_l=1
00:30:56.026 11:02:16 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:30:56.026 11:02:16 -- scripts/common.sh@344 -- $ case "$op" in
00:30:56.026 11:02:16 -- scripts/common.sh@345 -- $ : 1
00:30:56.026 11:02:16 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:30:56.026 11:02:16 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:56.026 11:02:16 -- scripts/common.sh@365 -- $ decimal 1
00:30:56.026 11:02:16 -- scripts/common.sh@353 -- $ local d=1
00:30:56.026 11:02:16 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:30:56.026 11:02:16 -- scripts/common.sh@355 -- $ echo 1
00:30:56.026 11:02:16 -- scripts/common.sh@365 -- $ ver1[v]=1
00:30:56.026 11:02:16 -- scripts/common.sh@366 -- $ decimal 2
00:30:56.026 11:02:16 -- scripts/common.sh@353 -- $ local d=2
00:30:56.026 11:02:16 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:30:56.026 11:02:16 -- scripts/common.sh@355 -- $ echo 2
00:30:56.026 11:02:16 -- scripts/common.sh@366 -- $ ver2[v]=2
00:30:56.026 11:02:16 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:30:56.026 11:02:16 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:30:56.026 11:02:16 -- scripts/common.sh@368 -- $ return 0
00:30:56.026 11:02:16 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
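The xtrace above walks cmp_versions from scripts/common.sh as it decides that lcov 1.15 sorts before 2, which selects the legacy 1.x --rc option names set just above. A compact re-implementation of the same comparison, assuming purely numeric components (the real helper also validates each field with its decimal function):

    # Split both versions on . - or :, then compare component by component.
    cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        # Missing components count as 0; 10# forces base 10 despite leading zeros.
        (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == '<=' || $op == '>=' || $op == '==' ]]  # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo 'legacy lcov: keep the 1.x --rc option names'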
00:30:56.026 11:02:16 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:30:56.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:56.026 --rc genhtml_branch_coverage=1
00:30:56.026 --rc genhtml_function_coverage=1
00:30:56.026 --rc genhtml_legend=1
00:30:56.026 --rc geninfo_all_blocks=1
00:30:56.026 --rc geninfo_unexecuted_blocks=1
00:30:56.026 
00:30:56.026 '
00:30:56.026 11:02:16 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:30:56.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:56.026 --rc genhtml_branch_coverage=1
00:30:56.026 --rc genhtml_function_coverage=1
00:30:56.026 --rc genhtml_legend=1
00:30:56.026 --rc geninfo_all_blocks=1
00:30:56.026 --rc geninfo_unexecuted_blocks=1
00:30:56.026 
00:30:56.026 '
00:30:56.026 11:02:16 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:30:56.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:56.026 --rc genhtml_branch_coverage=1
00:30:56.026 --rc genhtml_function_coverage=1
00:30:56.026 --rc genhtml_legend=1
00:30:56.026 --rc geninfo_all_blocks=1
00:30:56.026 --rc geninfo_unexecuted_blocks=1
00:30:56.026 
00:30:56.026 '
00:30:56.026 11:02:16 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:30:56.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:56.026 --rc genhtml_branch_coverage=1
00:30:56.026 --rc genhtml_function_coverage=1
00:30:56.026 --rc genhtml_legend=1
00:30:56.026 --rc geninfo_all_blocks=1
00:30:56.026 --rc geninfo_unexecuted_blocks=1
00:30:56.026 
00:30:56.026 '
00:30:56.026 11:02:16 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:30:56.026 11:02:16 -- scripts/common.sh@15 -- $ shopt -s extglob
00:30:56.026 11:02:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:56.026 11:02:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:56.026 11:02:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:56.026 11:02:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:56.026 11:02:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:56.026 11:02:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:56.026 11:02:16 -- paths/export.sh@5 -- $ export PATH
00:30:56.026 11:02:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
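paths/export.sh, traced above, prepends each toolchain directory unconditionally, which is why /opt/golangci/1.54.2/bin and friends appear twice in the echoed PATH. A guarded prepend (an alternative shown for contrast, not what export.sh does) skips directories that are already present:

    # Prepend a directory to PATH only if it is not already on it.
    path_prepend() {
      case ":$PATH:" in
        *":$1:"*) ;;           # already present, leave PATH unchanged
        *) PATH=$1:$PATH ;;
      esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH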
00:30:56.026 11:02:16 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:30:56.026 11:02:16 -- common/autobuild_common.sh@486 -- $ date +%s
00:30:56.026 11:02:16 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728385336.XXXXXX
00:30:56.026 11:02:16 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728385336.0yvb6D
00:30:56.026 11:02:16 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:30:56.026 11:02:16 -- common/autobuild_common.sh@492 -- $ '[' -n main ']'
00:30:56.026 11:02:16 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:30:56.026 11:02:16 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:30:56.026 11:02:16 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:30:56.026 11:02:16 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:30:56.026 11:02:16 -- common/autobuild_common.sh@502 -- $ get_config_params
00:30:56.026 11:02:16 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:30:56.026 11:02:16 -- common/autotest_common.sh@10 -- $ set +x
00:30:56.026 11:02:16 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:30:56.026 11:02:16 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:30:56.026 11:02:16 -- pm/common@17 -- $ local monitor
00:30:56.026 11:02:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:56.026 11:02:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:56.026 11:02:16 -- pm/common@25 -- $ sleep 1
00:30:56.026 11:02:16 -- pm/common@21 -- $ date +%s
00:30:56.026 11:02:16 -- pm/common@21 -- $ date +%s
00:30:56.026 11:02:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728385336
00:30:56.026 11:02:16 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728385336
00:30:56.026 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728385336_collect-vmstat.pm.log
00:30:56.026 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728385336_collect-cpu-load.pm.log
00:30:56.968 11:02:17 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:30:56.968 11:02:17 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:30:56.968 11:02:17 -- spdk/autopackage.sh@14 -- $ timing_finish
00:30:56.968 11:02:17 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:56.968 11:02:17 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:30:56.968 11:02:17 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:30:56.968 11:02:17 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:56.968 11:02:17 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:56.968 11:02:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:56.968 11:02:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:56.968 11:02:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:30:56.968 11:02:17 -- pm/common@44 -- $ pid=97796
00:30:56.968 11:02:17 -- pm/common@50 -- $ kill -TERM 97796
00:30:57.228 11:02:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:57.228 11:02:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:30:57.228 11:02:17 -- pm/common@44 -- $ pid=97797
00:30:57.228 11:02:17 -- pm/common@50 -- $ kill -TERM 97797
00:30:57.239 + [[ -n 5761 ]]
00:30:57.239 + sudo kill 5761
00:30:57.239 [Pipeline] }
00:30:57.255 [Pipeline] // timeout
00:30:57.261 [Pipeline] }
00:30:57.275 [Pipeline] // stage
00:30:57.281 [Pipeline] }
00:30:57.295 [Pipeline] // catchError
00:30:57.305 [Pipeline] stage
00:30:57.307 [Pipeline] { (Stop VM)
00:30:57.320 [Pipeline] sh
00:30:57.604 + vagrant halt
00:31:00.155 ==> default: Halting domain...
00:31:05.452 [Pipeline] sh
00:31:05.760 + vagrant destroy -f
00:31:08.302 ==> default: Removing domain...
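The stop_monitor_resources trace above shows the pm collectors' pidfile convention: each collector records its PID as collect-<name>.pid under the power output directory, and the EXIT trap TERMs whatever is still listed there. A minimal sketch of that teardown, with the directory taken from this log:

    # Signal every monitor whose pidfile still exists; a monitor that already
    # exited makes kill fail, which is tolerated here.
    stop_monitors() {
      local pidfile
      for pidfile in /home/vagrant/spdk_repo/spdk/../output/power/collect-*.pid; do
        [[ -e $pidfile ]] || continue
        kill -TERM "$(<"$pidfile")" 2>/dev/null || true
      done
    }
    trap stop_monitors EXIT  # mirrors autobuild_common.sh@505 in the trace above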
00:31:08.887 [Pipeline] sh
00:31:09.171 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:31:09.181 [Pipeline] }
00:31:09.195 [Pipeline] // stage
00:31:09.200 [Pipeline] }
00:31:09.214 [Pipeline] // dir
00:31:09.219 [Pipeline] }
00:31:09.233 [Pipeline] // wrap
00:31:09.239 [Pipeline] }
00:31:09.252 [Pipeline] // catchError
00:31:09.261 [Pipeline] stage
00:31:09.263 [Pipeline] { (Epilogue)
00:31:09.276 [Pipeline] sh
00:31:09.561 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:14.849 [Pipeline] catchError
00:31:14.851 [Pipeline] {
00:31:14.864 [Pipeline] sh
00:31:15.149 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:15.149 Artifacts sizes are good
00:31:15.159 [Pipeline] }
00:31:15.173 [Pipeline] // catchError
00:31:15.184 [Pipeline] archiveArtifacts
00:31:15.191 Archiving artifacts
00:31:15.307 [Pipeline] cleanWs
00:31:15.319 [WS-CLEANUP] Deleting project workspace...
00:31:15.319 [WS-CLEANUP] Deferred wipeout is used...
00:31:15.327 [WS-CLEANUP] done
00:31:15.329 [Pipeline] }
00:31:15.343 [Pipeline] // stage
00:31:15.348 [Pipeline] }
00:31:15.362 [Pipeline] // node
00:31:15.367 [Pipeline] End of Pipeline
00:31:15.405 Finished: SUCCESS